text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Global Mental Health Meets Social Innovation: the HOW matters
Introduction Mental health conditions are rising globally, and COVID-19 has exacerbated the situation. We often think that massive money investments and training of specialized mental health providers, such as psychiatrists, will help alleviate the demand-supply challenge. But the reality is different. Despite all efforts over the last years, the mental health treatment gap, the percentage difference between the number of people needing treatment for mental illness and the number of people receiving treatment, is still 50+% in countries like Germany. The investment of money and the training of specialized mental health providers alone will not be sudcient to decrease this number. Objectives We need to learn from and with partners from low- and middle-income countries (LMIC), in which a shortage of resources has prevented a signihcant investment in mental health but also has inspired the innovation and implementation of novel approaches to decrease the mental health treatment gap. This reshaped approach allows us to move from Northern Ventriloquism (high-income countries teach LMIC what to do) to honest cross-cultural bidirectional learning. Furthermore, it will hll the “how” gap. Methods We know WHY we should act in the (global) mental health field. We also know WHAT we should do. The main question remaining is HOW we can implement any of the activities. To fill this “how” gap, the Dresden-based NGO On The Move e.V. designed an annual 8-week program funded by the European Union, which centers around a global mental health and social innovation curriculum and aims to create spaces of empowerment towards mentally healthier communities. Our participants come from four higher education institutions in Germany, Ghana, and Kenya. Results The program, which was recently awarded the TU Dresden internationalization award in the category “Innovative International Research Cooperation,” encourages participants to learn from and with each other. To enable an holistic approach to mental health and diversify the pool of mental health champions, the program includes participants from all fields. Since the start of the program, hundreds of culturally sensitive mental health-related Youtube videos have been recorded and distributed widely in the communities of the participants. The number of participant-led advocacy events has also increased. Conclusions Contextually, we will discuss core concepts, such as human-centered and community-based approaches, and how they relate to filling the “how” gap in our presentation. We might not have a blueprint of solutions in terms of decreasing the mental health treatment gap; however, our recommendations can support innovative and customized solutions. From a process perspective, we will compare existing global mental health training curricula with our curriculum and highlight transcultural learning opportunities; we will also discuss the elements of our program that empower our trainees. Disclosure of Interest None Declared
Introduction: Mental health conditions are rising globally, and COVID-19 has exacerbated the situation. We often think that massive money investments and training of specialized mental health providers, such as psychiatrists, will help alleviate the demand-supply challenge. But the reality is different. Despite all efforts over the last years, the mental health treatment gap, the percentage difference between the number of people needing treatment for mental illness and the number of people receiving treatment, is still 50þ% in countries like Germany. The investment of money and the training of specialized mental health providers alone will not be sudcient to decrease this number. Objectives: We need to learn from and with partners from low-and middle-income countries (LMIC), in which a shortage of resources has prevented a signihcant investment in mental health but also has inspired the innovation and implementation of novel approaches to decrease the mental health treatment gap. This reshaped approach allows us to move from Northern Ventriloquism (high-income countries teach LMIC what to do) to honest cross-cultural bidirectional learning. Furthermore, it will hll the "how" gap. Methods: We know WHY we should act in the (global) mental health field. We also know WHAT we should do. The main question remaining is HOW we can implement any of the activities. To fill this "how" gap, the Dresden-based NGO On The Move e.V. designed an annual 8-week program funded by the European Union, which centers around a global mental health and social innovation curriculum and aims to create spaces of empowerment towards mentally healthier communities. Our participants come from four higher education institutions in Germany, Ghana, and Kenya. Results: The program, which was recently awarded the TU Dresden internationalization award in the category "Innovative International Research Cooperation," encourages participants to learn from and with each other. To enable an holistic approach to mental health and diversify the pool of mental health champions, the program includes participants from all fields. Since the start of the program, hundreds of culturally sensitive mental health-related Youtube videos have been recorded and distributed widely in the communities of the participants. The number of participant-led advocacy events has also increased. Conclusions: Contextually, we will discuss core concepts, such as human-centered and community-based approaches, and how they relate to filling the "how" gap in our presentation. We might not have a blueprint of solutions in terms of decreasing the mental health treatment gap; however, our recommendations can support innovative and customized solutions. From a process perspective, we will compare existing global mental health training curricula with our curriculum and highlight transcultural learning opportunities; we will also discuss the elements of our program that empower our trainees. Introduction: Resident physicians compared to the general public are exposed to a more rigorous schedule. Burnout as described by the World Health Organization is a phenomenon occurring in an occupational setting. It consists of three domains: feelings of exhaustion, reduced professional efficacy, increased mental distance from one's own job. Research shows that increased working hours are associated with higher levels of burnout in resident physicians. 
Objectives: Through literature review we will explore whether this burnout contributes to an increased suicidal risk in the resident physician population. Methods: Various studies assessing training of residents globally were analyzed and compared. A study in Japan distributed a survey to 4306 resident physicians. Suicidal ideation was noted in 5.6% of these physicians but when working more than 100 hours in the hospital the rate increased to 7.8%. In Australia it was found that once doctors in training worked more than 55 hours per week there was was an increase of 50% in suicidal ideation. It was also found that 12.3% of the people surveyed in the Australian study had reported suicidal ideation within the past 12 months of the survey. A study observing 5126 Dutch residents found 12% of residents having suicidal ideation but double in the group with burnout vs the group without burnout. Results: The studies listed show that increased work hours and burnout was associated with increased suicidal ideation in medical residents. A study observing 1354 physicians in the US found that higher measurements of burnout were associated with suicidal ideation similar to previous studies. However once adjusted for depression, it was noted that there was an association with depression and suicidal ideation but not with burnout. Depression may be a confounding variable that may have not been adjusted for when determining the association of burnout with suicidal ideation. In addition further research looking at the leading cause of death among a total of 381,614 US medical residents between the years 2000 to 2014 found suicide as the second most common cause of death. It was however found when looking at resident physicians between the age of 25-34.9 there was 4.07 suicides per 100,000 person years while in the general public there was 13.07 suicides per 100,000 years.
Relationship between Residency Burnout and Suicidal Risk in the Resident Physician Population
Conclusions: The rate of suicide was found to be lower in resident physicians compared to the general public. Suicidal ideation may be more closely associated with depression versus burnout itself and should be accounted for when assessing suicidal ideation in the resident physician population. Suicide rates being lower in resident physicians compared to the general public bring up the possibility that burnout in resident physicians does not have to be directly correlated with increased risk of suicide.
EPP0399
Tunisian general practitioner's perception of benzodiazepine prescription: Introduction: Despite the scientific requirements and restrictive recommendations, there is a significant disparity between theory and practice in the prescription of benzodiazepine(BZD). Longterm prescribing, defined by a duration exceeding six months has been commonly reported worldwide, some authors explained this by physician's perceptions.
Objectives: This study aimed to evaluate the perception of general practitioners practicing in Tunis, in the private or public sector, concerning the prescription of BZDs. Methods: A cross-sectional study was conducted among general practitioners in the private and public sectors practicing in Tunis during the study period (September and October 2021). It is based on the response to a questionnaire, which focused on the perception of prescribing BZD, via google forms distributed to members of the regional committee of the order of physicians.
Results: A total of 75 physicians participated in the study. The mean age was 47.75AE 12,2 years, with 17,28 AE 11,8 years of clinical experience. Among the 75 participating physicians, 83% considered that patients on BZD patients had a better quality of sleep and 58% assert that patients on BZDs had restful sleep. 83% of participants agreed that BZDs were associated with fewer nocturnal awakenings 83% and 85% with a decrease in the feeling of irritability. 18% of the doctors think that the easiest way to manage a patient's anxiety is to prescribe a BZD. 24 doctors believe that chronic use of BZDs is essential to control anxiety. patients' anxiety. The number of years of practice is inversely correlated with the perception that the patient wakes up less at night (p= 0.059). Male gender correlates with the perception that it is acceptable to continue prescribing beyond the recommended duration as long as they are well tolerated (p=0.035).
Conclusions: BZD prescription decision in general medicine is complex. This study participates in increasing our level of understanding of the reasons behind the long-term prescription of this molecule. Objectives: To discuss the introduction of an Alternative Assessment Pathway at the RANZCP and share progress regarding the broader changes to assessment models under consideration. Methods: The AAP was co-designed with trainees and Specialist International Medical Graduates and introduced in December 2021 as an interim measure to assess clinical competence in the absence of an OSCE. It comprises a portfolio review of completed end-ofrotation forms and, if required, a case-based discussion held via Zoom. Results: The AAP has been held over two rounds (December 2021 and March 2022) with 97% and 90% pass rates respectively (data correct as at 6 October 2022). An evaluation into the pathway is currently underway. In July 2022, the RANZCP introduced the Clinical Competency Assessment as a continuation of the AAP (with some modifications) for the remainder of 2022 and 2023. Conclusions: While the pandemic has become a catalyst for change, the RANZCP has been considering broader changes to the theory and practice of its assessments for some time. The presentation will provide an overview of its short-term clinical assessment model and share progress regarding change to the long-term assessment strategy.
Disclosure of
Disclosure of Interest: None Declared european Psychiatry S323 | 2,604 | 2023-03-01T00:00:00.000 | [
"Psychology",
"Economics"
] |
Mock modularity from black hole scattering states
The exact degeneracies of quarter-BPS dyons in Type II string theory on K3 × T2 are given by Fourier coefficients of the inverse of the Igusa cusp form. For a fixed magnetic charge invariant m, the generating function of these degeneracies naturally decomposes as a sum of two parts, which are supposed to account for single-centered black holes, and two-centered black hole bound states, respectively. The decomposition is such that each part is separately modular covariant but neither is holomorphic, calling for a physical interpretation of the non-holomorphy. We resolve this puzzle by computing the supersymmetric index of the quantum mechanics of two-centered half-BPS black-holes, which we model by geodesic motion on Taub-NUT space subject to a certain potential. We compute a suitable index using localization methods, and find that it includes both a temperature-independent contribution from BPS bound states, as well as a temperature-dependent contribution due to a spectral asymmetry in the continuum of scattering states. The continuum contribution agrees precisely with the non-holomorphic completion term required for the modularity of the generating function of two-centered black hole bound states.
Introduction and summary
The statistical explanation of thermodynamic entropy of black holes is one of the remarkable achievements of string theory [1,2]. The emerging picture is that a black hole is a bound state of an ensemble of fluctuating strings, branes, and other fundamental excitations of string or M theory. This picture has been checked to great precision for supersymmetric black holes in superstring theory. The microscopic degeneracy in this case is captured by a supersymmetric index that counts all micro-states carrying the same charges as that of the black hole in a weakly coupled regime. This index is robust under small changes of moduli, which allows us to extrapolate the weak coupling result to strong coupling. The leading order result for the logarithm of the index at large charges is then found to match JHEP12(2018)119 the thermodynamic black hole entropy, with no adjustable parameter. This match can be pushed to higher order by computing and comparing the subleading corrections to both the macroscopic and microscopic results (see the review [3]).
One can go even further and try to compute the exact macroscopic quantum entropy of supersymmetric black holes using the formulation of quantum entropy in [4], and compare it with the logarithm of the microscopic degeneracy of states. For four-dimensional black holes preserving four supercharges in N = 8 string theory in asymptotic flat space (Type II string theory compactified on T 6 ), one can actually sum up all the macroscopic quantum corrections using localization and recover the exact microscopic integer degeneracy [5,6]. This result prompts us to look for such exact agreement in other systems and, in particular, in theories with less supersymmetry.
A crucial guide in this successful comparison of the exact microscopic and macroscopic entropy is the modular symmetry of the generating function of the degeneracies of BPS states [7]. The microscopic degeneracies 1 of 1 8 -BPS states in N = 8 string theory are Fourier coefficients of the ratio of powers of the Jacobi theta function and of the Dedekind eta function [10][11][12][13] Z N =8 micro (τ, z) = ϑ 1 (τ, z) 2 /η(τ ) 6 . The function Z N =8 micro is a weak Jacobi form [14] and, in particular, transforms covariantly under the modular group SL 2 (Z). This modular transformation property leads to an analytic formula for the microscopic degeneracy, known as the Hardy-Ramanujan-Rademacher expansion, which expresses the integer coefficient of a modular form as an infinite series of Bessel functions of exponentially decreasing magnitude. This series can be interpreted on the macroscopic side as an infinite sum over orbifold geometries with the same AdS 2 asymptotics [15,16], and each term in the sum can be recovered, using localization, as the functional integral of bulk supergravity fluctuations around the corresponding saddle point [5,6,17].
For the next-to-simplest case of 1 4 -BPS black holes in N = 4 string theories, it turns out, however, that the modular symmetry is not manifest. The microscopic degeneracy is again a Fourier coefficient of a certain automorphic form, namely the inverse of the Igusa cusp form discussed below, but it includes contributions both from a single, spherically symmetric BPS black hole as well as contributions from two-centered black hole bound states [18]. 2 In order to single out the single-centered black hole microstates, we need to remove part of the spectrum, thereby spoiling some of the symmetries. The observation of [20] was that the modular symmetry is not broken, but has an anomaly: the degeneracies of microstates of 1 4 -BPS black holes are coefficients of mock Jacobi forms, which are holomorphic but not modular. They can, however, be made modular at the cost of adding a correction term which is non-holomorphic in τ (but still holomorphic in z) [21]. This characterization allows one to generalize the Rademacher expansion and enables complete control over the growth of the Fourier coefficients [22][23][24]. It has also been used to make progress on the bulk interpretation of the microscopic degeneracies of black holes [25,26]. 1 In this paper the word 'degeneracy' refers to a suitable helicity supertrace that counts the net number of short multiplets with given charges. Under favorable circumstances, this may coincide with the actual number of states [8,9]. 2 In N = 8 string vacua, multi-centered configurations have too many fermionic zero-modes to contribute to the relevant spacetime helicity supertrace [19].
JHEP12(2018)119
We note that a similar phenomenon arises in the context of N = 2 black holes [27,28], but in that context mock modular forms of higher depth are expected to arise due to the occurrence of BPS bound states involving an arbitrary number of constituents [29,30]. In this paper, inspired by earlier work [31,32] in the context of N = 2 black holes, we attempt to give a physical justification of this non-holomorphic correction from the macroscopic point of view in the N = 4 context, by computing the contribution of the continuum of scattering states in the quantum mechanics of two-centered BPS black holes. The rest of the introduction contains a summary of the details of our problem and its proposed solution.
Dyon degeneracy function in N = 4 string theory and its decomposition
Consider Type II string theory on K3 × T 2 , a theory with four-dimensional N = 4 supersymmetry. The U -duality group of the theory is SO (22,6, Z) × SL(2, Z) [33,34]. There are 28 gauge fields with respect to which we have electric charges N i and magnetic charges M i , i = 1, · · · , 28. These charges transform as a vector under the T-duality group SO (22, 6, Z), and the electric and magnetic charges transform as a doublet under the S-duality group SL(2, Z). The T-duality invariants are (N 2 /2, M ·N, M 2 /2) ≡ (n, , m), where the inner product is with respect to the SO(22, 6, Z)-invariant metric. The degeneracy of 1 4 -BPS dyons in this theory depends on these T-duality invariants as well as the point φ in moduli space. The degeneracy is given as the Fourier coefficient [35][36][37]: where Φ 10 is the Igusa cusp form, the unique Siegel modular form of weight 10.
Here the contour C depends on the moduli φ as well as the charge invariants (which we have suppressed in the above formula) [38,39] (see [40] for a recent new perspective on this formula). Above we have used the terminology of "dyon degeneracy" as is common, but it should be understood that the left-hand side of the formula (1.1) refers to the index of states that preserve a quarter of the spacetime supersymmetry. In the near-horizon region of attractor black holes, it turns out that all the states that contribute to this index are bosonic and therefore this index is really a degeneracy [8,9], but more generally there can be cancellations between bosons and fermions. In particular one can show that the only gravitational configurations that have non-zero contributions to the supersymmetric index in this situation are 1) single-centered 1 4 -BPS dyonic black holes and 2) two-centered black holes, each of which is individually 1 2 -BPS [18]. This suggests that the generating function can itself be decomposed as a sum of single-centered black holes and two-centered black hole bound states. This intuition was made precise in the M-theory limit in [20], in which we must first expand the generating function in the region σ → i∞: 3 3 In the M-theory limit, following the contour C in (1.1) leads to Im(τ ) = cτ R , Im(z) = cz R , Im(σ) = cσR, with R → ∞, where cτ , cz, cσ are functions of charges and other moduli that are held fixed in the limit, such that Imz = − 2m Imτ .
JHEP12(2018)119
The Fourier-Jacobi coefficients ψ m (τ, z) are meromorphic Jacobi forms of weight −10 and index m with a double pole at z = 0 and no others (up to translation by the period lattice Zτ + Z). The meromorphy is a hallmark of a phenomenon known as wall-crossing: as we vary Im(z) space, the Fourier coefficients of the Jacobi form ψ m (τ, z) with respect to Re(z) jump when Im(z) crosses an integer multiple of Im(τ ). This corresponds precisely to the appearance or disappearance of the bound state of two 1 2 -BPS black holes across a real codimension-one wall in the space of moduli φ, and the jump in the right-hand side of (1.1) is precisely the degeneracy carried by that bound state [18,41].
Focussing on the case m > 0 relevant for genuine black holes, the contributions of two-centered bound states are captured by the function ψ P m (τ, z), called the polar part of ψ m constructed to have the same poles and residues as ψ m as well as the same elliptic transformations under shifts of z by Zτ + Z. Its explicit form is given by 24 , which gives the degeneracy of half-BPS black holes with gcd(N 2 /2, M · N, M 2 /2) = m [42], and A 2,m (τ, z) is the Appell-Lerch sum The contribution of single-centered black holes can be computed by evaluating (1.1) at the attractor point φ * . Following the contour C(φ * ) in the M-theory limit (see Footnote 3), we are led to the generating function ψ F m (τ, z), called the finite part of ψ m , defined as: 6) and the weight-1 2 index-m theta function is defined as: (1.7) With these definitions, one can now check that the meromorphic Jacobi form ψ m (τ, z) is the sum of its finite and polar parts [20]: Since the function ψ P m (τ, z) has the same poles and residues as ψ m , the function ψ F m (τ, z) is holomorphic in z, consistently with its interpretation as the generating function of singlecentered black holes degeneracies, which cannot exhibit any wall-crossing phenomena.
Mock Jacobi forms and the holomorphic anomaly
The nontrivial part of the above decomposition theorem is of course its implication for modularity. The additive decomposition of ψ m breaks modularity of the individual pieces and, in particular, ψ F m (τ, z) is not a Jacobi form any more. The theorem states that by adding a specific non-holomorphic correction term that we will discuss in section 2 (see Equation (2.14)), to ψ F m (τ, z), one can obtain a non-holomorphic completion ψ F m (τ, z) which is modular and transforms as a Jacobi form of weight −10 and index m. As the lefthand side of (1.8) is a Jacobi form, it is clear that subtracting the same non-holomorphic correction term from ψ P m (τ, z) also gives a function ψ P m (τ, z) that transforms as a Jacobi form of the same weight and index. In other words: where both summands are non-holomorphic but modular. The failure of holomorphy of the completions ψ F m (τ, z) and ψ P m (τ, z) is captured by the following equation (with τ 2 = Im(τ )): (1.10) The fact that the completed partition function ψ F m transforms like a holomorphic Jacobi form suggests that it should be identified with the elliptic genus of the five-dimensional black string that descends to the black hole upon compactification on a circle. It was speculated in [20] that the non-holomorphic dependence on τ is caused by the non-compactness of the target space of the SCF T 2 , similar to the phenomenon studied in [43][44][45]. Unfortunately, a detailed implementation of this idea has remained elusive. In this paper we focus instead on the two-centered piece ψ P m , and investigate the physical origin of its non-holomorphic dependence.
Moduli space of two-centered black holes and continuum contribution
Consider the (n, )th Fourier coefficient d P (n, , m) of the function ψ P m (τ, z) (1.3) with respect to the potentials (Re(τ ), Re(z)). This coefficient depends on the value of Im(z) because of the meromorphy of ψ P m . For a given value of Im(z), determined by the values of the moduli at spatial infinity, it is expected to compute the Witten index of the supersymmetric quantum mechanics of two-centered BPS black holes with total charge invariants (n, , m). This interpretation has been checked very precisely: for fixed magnetic charge invariant m, the walls of marginal stability of two-centered bound states in the M-theory limit precisely correspond to the poles in z of the Appell-Lerch sum (1.4). All these walls can be mapped, by S-duality, to the wall at Im(z) = 0 across which a basic two-centered bound state consisting of a purely electric 1 2 -BPS black hole with charge invariant n and a purely magnetic where we have explicitly shown the dependence, discussed above, of the Fourier coefficient on Im(z), through the variable u 2 := Im(z)/τ 2 . The invariant corresponds to the field angular momentum in this bound state configuration, and the pole in z corresponds to the (dis)appearance of this bound state across a wall of marginal stablity. As we describe in section 2, the full generating function ψ P m (τ, z) is given by the sum over all S-duality images of this basic two-centered function ψ basic m (τ, z), and for this reason it is enough to focus our attention on the latter. Its Fourier coefficient is computed by the supersymmetric index d basic m (n, ; u 2 ) = Tr basic, bound (n, ,m) where the trace is taken over the bound state spectrum of the quantum mechanics describing the basic two-centered black hole configuration at the given value of u 2 . Since these bound states are normalizable and discrete, the trace reduces to a sum over the supersymmetric ground states. The idea that we pursue in this paper is that the completed polar part ψ P m (τ, z) should similarly arise from a supersymmetric partition function d basic m (n, ; β, u 2 ) = Tr basic, all (n, ,m) (−1) F e −βH (1.13) which includes contributions of the full spectrum in this same quantum mechanics. Here β is the inverse temperature and H is the quantum Hamiltonian of the two-centered configurations. The contributions of the bound state spectrum is of course independent of β and equal to (1.12), since only supersymmetric ground states with H = 0 contribute, but now there can be an additional contribution from the continuum spectrum, since the densities of bosonic and fermionic states need not be equal. We define the corresponding generating function ψ basic m (τ, z), where β is identified with 4πτ 2 . Averaging as before over all the S-duality images, we should recover the completed function ψ P m (τ, z) in (1.9). The quantum dynamics of the two-centered black hole bound state is not completely understood. In the context of black holes in N = 2 string vacua, it is well-described by the quiver quantum mechanics with 4 supercharges introduced in [46], or more simply by the supersymmetric quantum mechanics on R 3 which arises on its Coulomb branch [32,46,47]. In that case, 1 2 -BPS bound states arise from supersymmetric vacua in the quantum mechanics on R 3 describing the relative motion, while the 4 fermionic zero-modes come the center-of-motion degrees of freedom. 
Similarly, in the N = 4 context relevant for this paper, one would like to construct an analogue of the supersymmetric quantum mechanics on R 3 with 8 supercharges, such that 8 of the 12 fermionic zero-modes carried by 1 4 -BPS bound states arise from the center-of-motion degrees of freedom, while the remaining 4 correspond to the unbroken supersymmetries in the quantum mechanics describing the relative motion. While such a model does not appear to be documented in the literature, we shall obtain it by reducing a supersymmetric sigma model with 8 supercharges on Taub-NUT space, which is known to describe dyonic bound states in weakly coupled supersymmetric gauge theories [48][49][50]. One considers the dynamics of two 1 2 -BPS dyons of charge (Q 1 , P 1 ) and (Q 2 , P 2 ) on a sublocus of the Coulomb branch where the corresponding central charge JHEP12(2018)119 vectors are parallel, so that there are no static forces between the two dyons. Factoring out the center of motion, the dynamics captured by geodesic motion on the reduced monopole moduli space. When P 1 , P 2 are associated to two consecutive nodes on the Dynkin diagram associated to the gauge group G, this moduli space turns out to be the Taub-NUT manifold M TN , with metric (1.14) Here, r ∈ R 3 is the relative position of the two dyons, ψ ∈ [0, 4π] is the relative angle associated to large gauge transformations, and A is a connection along the circle fiber parametrized by ψ such that ∂ i H = ijk ∂ j A k . The parameter R controls the radius of the circle fiber at infinity, and is proportional to the square of the magnetic charges, while the momentum along the circle fiber is identified with the Dirac-Schwinger-Zwanziger pairing Q 1 P 2 − Q 2 P 1 . Away from the locus where the corresponding central charge vectors are parallel, the dynamics is still given by geodesic motion on M TN , but now subject to a potential proportional to the squared norm of the (tri-holomorphic) Killing vector ∂ ψ , with a coefficient that we denote by λ 2 . We find that the function ψ basic m (τ, z) is indeed encoded in this quantum mechanical system, but in a subtle manner. We need to introduce a third parameter u 2 , which corresponds to a three-variable generalization [21,51] of the two-variable Appell-Lerch sum in (1.4). Upon identifying this third parameter with the coupling constant λ introduced above as u 2 = −λR, we find that the Fourier coefficients of the three-variable function are reproduced by a suitable index in the above quantum mechanical system, but only in the attractor chamber where sign(u 2 ) = −sign( ). In particular, this index, which we introduce in section 3.3, and compute by localization methods in section 4, precisely reproduces the non-holomorphic completion term that is required for modularity.
The plan of this paper is as follows. In section 2 we discuss the microscopic partition function of the black hole bound states, and how it can be understood as a sum of S-duality images of the basic bound state partition function. We then discuss the appearance of the Appell-Lerch sums and their non-holomorphic modular completions, and introduce a threevariable generalization. In section 3 we discuss the supersymmetric quantum mechanical system which we use to model the dynamics of the basic black hole bound state and discuss a set of refined indices which get only contributions from short multiplets. In section 4 we compute the refined index using localization, and discuss the relation of this result to the microscopic partition functions for the black hole bound states. In section 5 we summarize and discuss some puzzles and open questions. Appendix A contains a suggestive attempt to compute the spectral asymmetry directly by Hamiltonian methods, eschewing a full analysis of the quantum mechanical model.
Black hole bound states and Appell-Lerch sums
In this section we explain the physics and the mathematics of the two-centered black hole bound state partition function ψ P m (τ, z). Then we present some Fourier expansions of the
JHEP12(2018)119
Appell-Lerch sums. Finally we discuss the mathematics of the non-holomorphic parts in some detail.
Basic two-centered black hole bound state and its decay
We first consider a system of two 1 2 -BPS black holes where one center has purely electric charge ( N , 0) and the other purely magnetic charge (0, M ). The degeneracy of the internal states carried by the first center is d(n) ≡ p 24 (n + 1), which is the Fourier coefficient of the generating function [42] 1 (2.1) By S-duality, the degeneracy of the internal states carried by the second center is d(m).
Depending on the values of the moduli at infinity, the quantum mechanics of the relative degrees of freedom has either no supersymmetric ground states, or | | of them, where = M · N is the Dirac-Schwinger-Zwanziger product of the charges of the constituents, transforming as a multiplet of spin ( − 1)/2 under spatial SO(3) rotations [52]. The tensor product of the configurational and internal degrees gives | | d(n) d(m) BPS bound states of total charge (M, N ).
We now consider a generating function of degeneracies with fixed magnetic charge invariant m and arbitrary electric charge invariants n and , with chemical potentials τ and z, respectively. In the chamber where only bound states with > 0 are allowed, the contribution of the above bound states is then where ζ = e 2πiz . In contrast, in the chamber where only bound states with < 0 are allowed, the contribution of the bound states is 3) The first and second factors are the internal degeneracies of the half-BPS magnetic and electric centers, respectively, as explained above. The third factor in (2.2) and (2.3), taking into account configurational degrees of freedom, is the Fourier expansion of the meromorphic function The basic wall-crossing of the theory is clear from the above two equations: for a fixed value of , the degeneracy jumps across the wall Im(z) = 0, which is the image in complex z-space of the wall in moduli space across which the two-centered bound state with the given value of decays or is created.
S-duality and the sum over all wall-crossings
In N = 4 string theory, one can map all the codimension-one walls of marginal stability in moduli space [18].
These walls can be mapped to the plane of the four-dimensional complex modulus 4 S = S 1 + iS 2 ∈ H. In the upper-half S-plane, the walls are either straight lines intersecting the S 1 -axis at the integers, or minor circular arcs intersecting the S 1 -axis at consecutive integers. The analysis of [20] is performed in the M-theory limit, in which the radius R of the M-theory circle is taken to be large keeping other scales in the problem fixed. In this limit, the modulus scales as S 2 ∼ R, and as a consequence, the only relevant walls in this limit are the straight lines. The basic wall at z = 0 maps to the vertical line at S 1 = 0. The other straight lines are images of this line under the S-duality transformation γ = 1 s 0 1 , s ∈ Z, and are therefore associated to the decay The number of configurational BPS ground states on a suitable side of this wall is N 1 · M 2 − N 2 · M 1 = − 2ms, while the electric charge invariant for the purely electric constituent is N 1 2 /2 = n + s 2 m − s . The S-duality transformation parameterized by the integer s can thus be identified with the elliptic transformation z → z + sτ acting on Jacobi forms of index m.
The full generating function that captures all bound states relevant in the M-theory limit is therefore obtained by summing over the elliptic transformation images of (2.4). This is achieved by the operator: which sends any function of ζ of polynomial growth in ζ to a function of ζ transforming like an index m Jacobi form under translations by the full lattice Zτ + Z [20]. Applying this to the function (2.4) leads to the Appell-Lerch sum:
JHEP12(2018)119
The moduli dependence of the Fourier coefficients of Appell-Lerch sum A 2,m is apparent in the following Fourier expansion, valid when u 2 ≡ Im(z)/Im(τ ) is not an integer: Note that the ambiguity of sign( ) at = 0 is irrelevant since this term does not contribute to the sum.
Thus the final answer for the full generating function of two-centered black hole bound state degeneracies is precisely the polar part of meromorphic Jacobi form ψ m discussed in the introduction: (2.10)
Non-holomorphic modular completion
The completion A 2,m of A 2,m is defined as: with ϑ * m, given by the non-holomorphic Eichler integral of ϑ m, [20]: The completion A 2,m transforms as a Jacobi form of weight 2 and index m [20,21]. Given that 1/η(τ ) 24 is a modular form of weight −12, we have that completion of the two-centered generating function (2.14) Putting together the above defining equations of A 2,m (τ, z), we can rewrite it as: In this summation, = 2mλ runs over all integers, while the constraint r ≡ (mod 2m) is equivalent to r ≡ (mod 2m). We solve this constraint by setting r = 2ms + with s ∈ Z. Dropping the prime on , we obtain
JHEP12(2018)119
Combining Equations (2.9) and (2.16), the full completed Appell-Lerch sum is given by (2.17) In this form, the modular invariance of (2.17) is a straightforward consequence of Vignéras's criterion for the modularity of indefinite theta series [53].
Three-variable Appell-Lerch sum
The two-variable completed Appell-Lerch sum (2.17) can in fact be obtained by acting with a suitable derivative operator on the weight-one indefinite theta series with two elliptic parameters Thus, the derivative A 1,m (τ, z, z) reduces to A 2,m (τ, z) at z = 0. The quantity a (τ 2 , u 2 , u 2 ) defined in (2.20), which appears as the Fourier coefficient of the term in (2.19) with s = 0, is the one which we shall be able to obtain from an index computation in the supersymmetric quantum mechanics of the basic black hole bound state. More precisely, we shall identify its value at the attractor point u 2 = − /2m with a suitable index (4.20) receiving contributions both from discrete states and from the continuum of scattering states. We do not know yet how to recover (2.20) away from the attractor chamber, since we have not been able to identify the effect of the variable u 2 = Imz/τ 2 on the supersymmetric quantum mechanics. We note, however, that the Fourier coefficient in the non-holomorphic correction term (2.16) is independent of u 2 , and is entirely reproduced by the limit of (2.21) as u 2 → 0,
Moduli space dynamics of two-centered black holes
In this section we review the supersymmetric quantum mechanics that captures the relative low-energy dynamics of the dyonic bound states. The bosonic part corresponds to geodesic motion on Taub-NUT space, subject to a suitable potential. We briefly review the known spectrum of BPS bound states and the relevant indices which are sensitive to them.
Classical dynamics of mutually non-local dyons
As mentioned in the introduction, the relevant properties of 1 4 -BPS black hole bound states in N = 4 string vacua are captured by the supersymmetric quantum mechanics describing the dynamics of two 1 2 -BPS dyons in weakly coupled four-dimensional N = 4 Super Yang-Mills theories with gauge group SU(3), carrying magnetic charges associated to the two simple roots of SU(3). This problem has been intensively studied in the literature [48][49][50]55] using a two-step procedure: first by considering a point on the Coulomb branch where the six adjoint Higgs fields in the Cartan algebra of SU(3) are aligned, and then perturbing away from this locus. When the Higgs fields are aligned, the classical theory reduces to SU(3) Yang-Mills theory with a single adjoint Higgs field. In this case, the two dyons do not experience any static forces, and their relative motion of two dyons with is governed by geodesic motion on Taub-NUT space M TN with metric (1.14). In units where the reduced mass is set to 1, the Lagrangian is simply where H(r) = 1 R + 1 | r| and ψ ∈ [0, 4π] parametrizes the circle fiber at infinity. Denoting by p and 5 q ∈ Z/2 the canonical momenta conjugate to r and ψ, the Hamiltonian describing this geodesic motion is then where A is the potential for a unit-charge Dirac monopole sitting at r = 0. The momentum q is equal to half the Dirac-Schwinger-Zwanziger pairing of the two dyons, and we shall restrict our attention to q = 0, corresponding to the mutually non-local case. The potential JHEP12(2018)119 V = 1 2 Hq 2 being monotonically decreasing towards spatial infinity, this system admits no bound states, but only scattering states.
Upon perturbing away from the single-Higgs field locus, it has been shown that the two dyons start experiencing static forces, such that their relative motion is described by motion on the same Taub-NUT space with an additional potential term proportional to the square of the Killing vector ∂ ψ . This potential being invariant under translations along the fiber, the momentum q is still conserved and the relative dynamics is now described by the Hamiltonian where λ measures the distance away from the single-Higgs field locus. At the classical level, it is straightforward to see that the potential V = H 2 q 2 + λ 2 2H admits bound states whenever |λ| > |q/R| is large enough, localized around the global minimum at In either case, the ground state energy is V (r 0 ) = |λq| (independently of R), corresponding to a binding energy where E c = lim r→∞ V (r) = 1 2 q 2 R + λ 2 R . Note that (3.5) holds provided that bound states exist, namely qϑ + > 0 or qϑ − < 0, and that the sign ± is equated with the sign of qλ. In addition, as in the case of the hydrogen atom, we expect an infinite number of discrete bound states with energy ranging between E = |λq| and E c . If instead |λ| < |q/R| is too small, the potential is monotonically decreasing towards infinity, and there are no classical bound states. Thus, as the parameter λ is varied from −∞ to +∞, bound states disappear when λ crosses the value −|q/R| and reappear when it crosses |q/R|. In addition, irrespective of the value of λ, the classical spectrum admits a continuum of scattering states with energy E ≥ E c .
Bosonic quantum mechanics
We now briefly discuss the spectrum of the quantum Hamiltonian obtained by replacing p by i∂/∂ r in (3.3). The resulting operator commutes with the angular momentum operator In a sector with J 2 = j(j + 1) and J 3 = m, the wave function Ψ( r) factorizes into a radial part f (r) and a monopole harmonic Y q,j,m with with ∈ N the orbital angular momentum. The radial part of the Schrödinger equation HΨ = EΨ is then , we find that (3.8) reduces to the Whittaker equation The solutions are linear combinations of Whittaker functions, In order for the wave function to be regular at the origin, the coefficient γ must vanish. For normalizable bound states, the parameter µ (hence the radial wave number k) can only take discrete values in order for the wave function to decay at infinity. Using the standard formula and W (z) ∼ z λ e −z/2 as |z| → ∞, we see that this happens when Γ(µ + ν + 1 2 ) has a pole, i.e. 6 R 2 k 2 n − 2q 2 2R ϑ 2 − k 2 n = j + n + 1 , n ∈ N , (3.13)
JHEP12(2018)119
where we recall that j = |q| + . As expected in a bosonic model, the ground state = n = 0, transforming as a spin |q| representation of SU(2), have energy strictly bigger than the minimum V (r 0 ) = |qλ| of the potential .
In contrast, for scattering states, the radial wave number can take arbitrary values k > ϑ. The S-matrix in an angular momentum channel j is easily read off from (3.12), (3.14) The density of states in the continuum (relative to the density of states for a free particle in R 3 ) is related to the phase of the S-matrix via ρ(k)dk = 1 π d[Im log S(k)]. The thermal partition function for a spinless mode, including contributions from the continuum, is then It is worth noting that this expression is formal since the sum over the orbital angular momentum diverges. We shall regulate this divergence by imposing a cut-off at ≤ m .
Supersymmetric quantum mechanics
Taking into account fermionic zero-modes associated to the supersymmetries broken by the two dyons, the classical dynamics must be described by a supersymmetric extension of the previous model with 8 supercharges [48]. One way to find the supersymmetric extension of the Lagrangian (3.1) is by dimensional reduction of a two-dimensional (4, 4) sigma model on a hyperKähler manifold. As shown in [57,58], such a model can be deformed by adding a potential proportional to the norm squared of a tri-holomorphic vector field. Alternatively, one may start from the undeformed model in two-dimensions but perform the dimensional reduction with Scherk-Schwarz twist [59]. The resulting one-dimensional model admits a supersymmetry algebra with a central term [48], where the indices α, β run over {1, 2} while the indices µ, ν run over {1, . . . , 4}, corresponding to the four directions on the tangent space of the HK manifold. Defining Q µ ± = (Q µ 1 ± Q µ 2 )/ √ 2, this can be rewritten as In view of their two-dimensional origin, we shall refer to Q µ + and Q µ − as the right-moving and left-moving supercharges, respectively. In addition to the usual fermionic parity (−1) F ,
JHEP12(2018)119
which anticommutes with both Q µ + and Q µ − , the model admits two Z 2 -gradings 7 which we shall denote by (−1) F ± , such that (−1) F = (−1) F + (−1) F − . The operators (−1) F ± anticommute with Q µ ± but commute with Q µ ∓ , in line with the fact that they descend from the fermionic parities on the right-moving and left moving side in two-dimensions.
For the model (3.3) of interest, the central charge is Z = λq, which we assume to be non-zero. The classical ground states described in (3.4) lead to BPS states annihilated by Q + when λq > 0, or by Q − when λq < 0. In either case, they obtain 4 fermionic zeromodes from the broken supersymmetries of the quantum mechanics describing the relative motion (as well as another 8 from the center-of-mass motion, reproducing the 12 fermionic zero modes of a 1 4 -BPS bound state in the four-dimensional N = 4 theory). Moreover, the highest weight vector in the supersymmetric multiplet carries angular momentum |q| − 1 2 , with |q| originating from the magnetic term in (3.6) and − 1 2 from the spin degrees of freedom. It follows that the indices are given by These indices agree with the Dirac indices computed by localization with respect to the action of the Killing vector ∂ ψ in [60]. One can refine these indices by introducing a fugacity conjugate to conserved charges commuting with the supercharge as follows. Using the terminology of the two-dimensional (4,4) sigma model, we first note that the algebra (3.17) is invariant under independent SO(4) rotations of the left and right-moving charges. These are a priori outer automorphisms of the algebra, but it turns out that certain combinations are symmetries of the Hamiltonian. Writing SO(4) = SU(2) × SU(2) on the right-moving side, we define J + and I + as, respectively, half the sum and half difference of the Cartan generators of SU (2) and SU (2). Similarly, we define J − and I − as half the sum and difference of the two Cartan generators on the left-moving side. The operators (−1) 2J ± are the Z 2 gradings mentioned previously, while J = J + + J − is identified with the Cartan generator of the SU(2) rotational isometry of the Taub-NUT space, corresponding to the physical angular momentum of the two-centered system. In addition, there is a conserved charged q corresponding to translations along the circle direction ψ.
The representations of the supersymmetry algebra (3.17) are obtained by tensoring representations of the left-moving and right-moving algebras. If E > |Z|, the irreducible representations on both sides have dimension 4, and carry the charge assignments given in table 1. Using the fact that Tr(−1) 2J ± y 2(J ± +I ± ) = 0 on either of these representations, it is immediate to see that the resulting long representations, of dimension 16, do not contribute
JHEP12(2018)119
to either of the following traces,
19)
I − (λ; y, v) = Tr q (−1) 2J e −β(H+qλ) y 2(J+I − ) e 4πivI + , (3.20) where the trace is taken over the discrete spectrum in the sector with charge q. If instead E = qλ > 0, the right-moving representation is one-dimensional, and carries I + = J + = 0, while the left-moving representation is the one given in table 1. The resulting short representations, of dimension 4, do not contribute to I − q , but it does contribute to I + q with a term proportional to Tr(−1) 2J − y 2J − e 4πivI − = 2 cos(2πv) − y − y −1 . Similarly, if E = −qλ > 0, the representation on the left-moving side is one-dimensional, and carries I − = J − = 0. The resulting short representations do not contribute to I + , but it does contribute to I − , with a term proportional to 2 cos(2πv)−y −y −1 . In either case, the result is independent of β. Using the fact that the highest weight vector in the representation carries angular momentum |q| − 1 2 , we find is the character of a spin j representation of SU(2) (we set χ j = 0 whenever j < 0). In this expression, the prefactor vanishes unless q(Rλ − q) > 0, in which case it gives −1. Note that this result vanishes at y = 1, v = 0, in agreement of the vanishing of the Witten index I = 0. However its second derivative with respect to y I + (λ) = −2 y d dy 2 I + (λ; y, 0) y=1 (3.22) happens to agree with the result for I + in (3.18). Similarly, the refined index I − is given by (3.23) whose second y−derivative at y = 1, v = 0, happens to agree with the result for I − in (3.18). This observation suggests that the exotic indices I ± = Tr(−1) F ± may be related to more standard indices, where states are counted with the physical fermionic parity (−1) F = (−1) 2J .
Rather than considering the refined indices I ± (λ; y), which involve a fugacity both for the angular momentum J and R-charge I ± , one may consider the helicity partition function with a fugacity y conjugate to the physical angular momentum. Unlike the refined indices (3.19), (3.20), this trace receives contributions from long representations, given by
JHEP12(2018)119
State where is the orbital angular momentum (not to be confused with the summation variable appearing in section 1) . Moreover, short multiplets contribute in the same way to I(λ; y) and I + (λ; y, 0) when λq > 0, or to I(λ; y) and I − (λ; y, 0) when λq < 0, and in both cases carry zero orbital angular momentum. It follows that the contributions of short multiplets is given by where the prefactor ensures that I(y) vanishes unless R|λ| > |q|, which is the range where bound states exist. It is easy to check that (3.25) is of order (y −1) 4 near y = 1, while (3.26) is of order (y − 1) 2 . It follows that the second derivative at y = 1, also known as the helicity supertrace, receives only contributions from short multiplets, coincides with one quarter of the sum of the indices I ± in (3.18), As we have shown, the refined indices I ± (λ; y, v), defined in (3.19), (3.20) as a trace over the discrete spectrum, get contributions only from short BPS states, and are independent of the temperature β. Upon including the contribution of the continuum of scattering states in the trace, then the contribution from bosons and fermions need no longer cancel perfectly, and the resulting indices, which we denote by I ± q (λ; β, y, v), may acquire a dependence on β. The density of bosonic and fermionic scattering states can in principle be calculated as in Equations (3.14), (3.15) from the knowledge of the S-matrix, but this requires diagonalizing the action of the Hamiltonian on the 16 helicity states, which is cumbersome. 8 In the next section, we shall calculate I + q (λ; β, y, v) using the method of supersymmetric localization. We shall recover the contribution of the bound states discussed in this section, as well as the contribution from the continuum, which we compare with the microscopic prediction.
JHEP12(2018)119 4 Supersymmetric partition function from localization
In this section we compute the refined index (3.19) for the quantum mechanics with 8 supercharges described in the previous section, using localization in a gauged linear model that flows in the infrared to the model of interest. We find that the result reproduces the expected contributions of short multiplets in the discrete spectrum, plus a β-dependent contribution which can be ascribed to a spectral asymmetry in the continuum. We compare the result with the microscopic answer given in Equation (2.21) and find agreement for the discrete contribution upon a suitable identification of moduli. The same identification then leads to the correct non-holomorphic term as well.
Localization in the two-dimensional (4,4) sigma model on Taub-NUT
In the context of two-dimensional (4,4) sigma models, the elliptic genus of Taub-NUT space M TN was computed in [59] by localization in a two-dimensional gauged linear model which flows to the non-linear (4,4) sigma model on M TN . This gauged linear sigma model simply involves two free hypermultiplets (q 1 , q 2 ) ∈ H 2 and one vector multiplet gauging the non-compact symmetry (q 1 , q 2 ) → (e it q 1 , q 2 + νt) [61,62]. At low energy, the model flows to a sigma model on the hyperKähler quotient H 2 ///R, which is well-known to be Taub-NUT space. In particular, the triholomorphic U(1) isometry and the rotational SU(2) isometry of M TN simply descend from the circle action (q 1 , q 2 ) → (e iα q 1 , q 2 ) and action of the unit quaternions (q 1 , q 2 ) → (pq 1 , pq 2 p) with pp = 1, which commute with the gauge symmetry [63, §3.1]. The authors of [59] considered the refined elliptic genus 9 where H RR is the Hilbert space on the cylinder in the Ramond-Ramond sector (including both normalizable states and states in the continuum), L 0 , L 0 are the zero-modes of the Virasoro generators on the cylinder, q is the charge under the triholomorphic U(1) action, and q 1 , q 2 , q 3 are the charges under the Cartan generators of SU(2) 1 ×SU(2) 2 ×SU(2) 3 , where SU(2) 1 is the action of the unit quaternions above, while SU(2) 2 × SU(2) 3 is the standard R-symmetry of two-dimensional (4,4) sigma models. To see that the observable (4.1) is protected, note that supercharges transform as (2, 1, 2) − ⊕ (2, 2, 1) + under SU(2) 1 × SU(2) 2 × SU(2) 3 (where the subscript indicates the two-dimensional helicity), therefore as Thus, there exists one supercharge which commutes with SU(2) L ×SU(2) 3 , allowing for chemical potentials conjugate to q 1 + q 2 and to q 3 . Using the localization techniques for (0, 2) sigma models developed in [64] one finds [59, (3.16)]: 2) where u = u 1 + iu 2 , which encodes the holonomies of the vector multiplet, is integrated over the Jacobian torus E(τ ) = C/(Z + τ Z). The parameter R, denoted by g 2 in [59], will JHEP12(2018)119 be related to the radius R of Taub-NUT shortly. In this localisation computation, it is important to keep the parameter ξ 2 non-zero, since otherwise the two simple poles in the denominator would collide into a double pole, leading to a logarithmic divergence of the form dudu 1 |u| 2 . For ξ 2 = 0, the simple poles are integrable, and the result is manifestly holomorphic in v, albeit not in τ, ξ 1 nor ξ 2 .
Localization in the quantum mechanics with 8 supercharges on Taub-NUT
In principle, the localization techniques of [64] apply just as well to sigma models with 2 supercharges in one dimension [65], with several complications due to the fact that the holonomies of the vector multiplet now live in an infinite cylinder, rather than on a compact torus. Alternatively, one may start from the two-dimensional sigma model and keep only the contributions from the center of mass modes and remove the contribution of the oscillator modes [66,67]. The observable (4.1) becomes where H = 1 2 (L 0 + L 0 ) is the Hamiltonian for the zero-modes. Setting β = 4πτ 2 , y = e −2πiξ 2 , ξ 1 = ξ r 1 + iλτ 2 , and identifying we recognize the generating function of the indices (3.19) discussed in the previous section -where the trace in (4.3) a priori includes contributions both from normalizable states and from the continuum. The identification Im(ξ 1 ) = τ 2 λ is motivated by the fact for this choice, the first two exponential factors in (4.3) recombine into e −β(H−Z) with central charge Z = qλ, as in (4.3). The fact that switching on an imaginary part for the chemical potential ξ 1 conjugate to the momentum along the triholomorphic isometry induces a scalar potential proportional to the square of the Killing vector is not obvious and will be justified a posteriori. In order to obtain the localized functional integral for our one-dimensional sigma model, we first recall the origin of the various terms in the two-dimensional computation of [59] leading up to (4.2). The variables u 1 , u 2 living on the torus are the values of the Wilson lines of the gauge fields that parameterize the localization manifold. The sum over p, w is the classical contribution of momentum and winding modes of the worldsheet around the compact direction in target space. The ratio of Jacobi theta functions arises from the quadratic fluctuation determinant in the directions orthogonal to the localization manifold. In our analogous one-dimensional computation, the variable u 1 is a Wilson line of the gauge field while the variable u 2 is now interpreted as the zero mode of a scalar field which can take values in the real line [65]. Thus the integral over the torus E(τ ) reduces to an integral over a cylinder of unit radius. In the classical contribution we only have momentum modes and all the w = 0 modes are discarded. In the one-loop contribution, discarding the JHEP12(2018)119 oscillator modes and keeping only the center of mass modes means that the Jacobi theta function reduces to a trigonometric function θ 1 (τ, u) → 2q 1/8 sin πu. We thus arrive at (4.6) where u 1 ∈ [0, 1], u 2 ∈ R. As in (4.2), it is important to keep ξ 2 = 0 in this computation, since otherwise the double pole would lead to a logarithmic divergence. 10 As a result, (4.6) is manifestly holomorphic in v but not in ξ 1 , ξ 2 .
JHEP12(2018)119
where ξ 1 = ξ r 1 + iξ i 1 , ξ 2 = ξ r 2 + iξ i 2 , and the notation [ · ] + denotes the even part of a function with respect to ξ 2 , namely We want to rewrite this expression as a Fourier expansion in ξ r 1 . The effect of pulling the three terms in the first parenthesis inside the summation symbol is to shift the value of n in e 2πin(u+ξ 1 ) to n + 1, n, n − 1, respectively. For |n| > 1, this shift can be absorbed by a corresponding change of the summation variable, because sign(n) = sign(n ± 1) for these values. For the remaining values n = 0, ±1, this shift changes the expression, but by odd function of ξ 2 which does not contribute to the even part. We thus arrive at the expansion: e 2πi(n−1)ξ 2 −2 cos 2πv e 2πinξ 2 +e 2πi(n+1)ξ 2 2i sin 2πξ 2 + . (4.13) Now, the integral over u 1 in (4.7) identifies the summation variable n with 2q. The integral over u 2 splits into two pieces -the first one, proportional to sign(n) is gaussian, and the second part can be computed using (4.14) In this way we arrive at the Fourier expansion of (4.6) with respect to ξ r 1 : with e 2πi(2q−1)ξ 2 −2 cos 2πv e 4πiqξ 2 +e 2πi(2q+1)ξ 2 e 2πiξ 2 −e −2πiξ 2 + . (4.16) The expression (4.15) is then the result for the refined index defined in (3.19), where the trace includes both discrete states and states in the continuum.
Interpreting the result
Performing identifications anticipated above (4.4), assuming for the moment that ξ 2 is real (i.e. ξ i 2 = 0) and further setting R = 2R, the result (4.16) becomes × χ |q| (y) − 2 cos(2πv) χ |q|− 1 2 (y) + χ |q|−1 (y) with y = e −2πiξ 2 and β = 4πτ 2 . In the limit β → +∞, this reduces to I q (λ; ξ 2 , v) = [1 − sign (q) sign (q − Rλ)] χ |q| (y) − 2 cos(2πv) χ |q|− 1 2 (y) + χ |q|−1 (y) , (4.18) in perfect agreement, up to overall sign, with the result (3.21) for the contributions of short multiplets in the discrete spectrum (a similar observation was made in [59, Equation (5.15)]). Interestingly, the error function in (4.17) also shows up with the same argument in the result for the helicity supertrace (A.15) computed in appendix A, and it ensures that the result is smooth as a function of λ, even at λ = 0 where the potential disappears. It is also worth noting that (4.18) vanishes at y = 1, however this is only so if this value is approached along the unit circle |y| = 1. If we allow ξ 2 to have a non-zero imaginary part, then the result (4.16) is in fact divergent at ξ 2 = 0, reflecting the logarithmic divergence of the integral (4.6) at that value. In fact, just as the imaginary part of ξ 1 is related to the coefficient λ of the scalar potential on Taub-NUT, one might expect that a non-zero value of ξ i 2 may have a similar effect of inducing a scalar potential, and change the classical dynamics of the system.
Let us now extract the index I + q by taking two derivatives with respect to ξ 2 before setting ξ 2 = 0 as in (3.22), i.e.
If we restrict ξ 2 to lie along the imaginary axis (ξ 2 = iξ i 2 ), we find This is precisely the function 4 a attr (τ 2 , u 2 ) in Equation (2.21), upon identifying m = 2R, u 2 = −mλ and = 2q. The overall factor of 4 is due to our choice of normalization, which was tailored to match the indices I ± in (3.18) in the limit where τ 2 → ∞. We note that other ways of treating the derivative d dξ 2 in (4.19) would give a different coefficient for the Gaussian term in (4.20). At the moment we do not have a physical justification for the prescription used above, which seems to be required for modularity.
Supersymmetric quantum mechanics with four supercharges
Here we briefly discuss the index in the supersymmetric quantum mechanics obtained by reducing the (0,4) sigma model on Taub-NUT space, which provides an alternative description of the quantum mechanics of two BPS black holes in N = 2 string vacua. The elliptic genus in this model was computed using the same localization techniques in [59, (6.11)]. Including the contribution of the left-moving fermions, we arrive at where ξ 1 couples to the U(1) charge conjugate to the tri-holomorphic isometry, and ξ 2 couples to a linear combination of Cartan generators for the rotational isometry and Rsymmetry. As before, ξ 2 must be kept non-zero in order for the integral to be well-defined. The analogous one-dimensional sigma model computation as described above leads to The Fourier expansion with respect to ξ r 1 can be computed using the same methods as in section 4.3. Upon identifying ξ i 1 = τ 2 λ as before, and taking the limit ξ 2 → 0 keeping ξ 2 purely imaginary we find i.e. precisely the same result (4.20) as in the model with 8 supercharges, up to an overall factor of − 1 4 . In particular, in contrast to the model studied in [32], the contribution from the continuum produces both a term proportional to the complementary error function, as well as a Gaussian term, which is in fact necessary for the modular invariance of the generating function of MSW invariants [27,28].
Discussion
In this paper we studied the supersymmetric quantum mechanics of a particle moving in Taub-NUT space M TN , as a model for the relative dynamics of two-black-hole bound states in N = 4 string theory. We analyzed this system both from a Hamiltonian viewpoint and by using localizing the functional integral. The spectrum of the theory consists of a discrete part, corresponding to bound states, as well as a continuum part, corresponding to scattering states. Our main goal was to compare the contribution of the continuum with the non-holomorphic completion required for modularity of the generating function of black hole degeneracies in the microscopic analysis.
We mainly focussed on the supersymmetric index I(τ 2 ; ξ 1 , ξ 2 , v) where the parameter τ 2 couples to the Hamiltonian, ξ 1 couples to the U(1) charge q under the triholomorphic isometry of M TN , ξ 2 to a combination of the Cartan generator of the SU(2) rotational isometry and an R-charge q 2 , and v to different R-charge q 3 in the supersymmetric quantum mechanics. The imaginary part of ξ 1 is proportional to the coefficient λ of the scalar potential which deforms the geodesic motion on M TN , while preserving all supersymmetries. Using the Hamiltonian formulation of the model, we computed the contribution of the discrete states to the above refined index, as well as to other indices and helicity supertraces. We recovered the same result using supersymmetric localization in the functional integral, along with contributions from the continuum of scattering states. The main result is summarized in Equations (4.15), (4.16). The discrete part of this result agrees with the Hamiltonian computation upon identifying Im(ξ 1 ) = τ 2 λ.
Upon computing the second Taylor coefficient in ξ 2 at v = 0, assuming the chemical potential ξ 2 to be purely imaginary, we found that I(τ 2 ; ξ 1 , ξ 2 , v) precisely reproduces JHEP12(2018)119 the Fourier coefficient a attr (τ 2 , u 2 ) in (2.21) appearing in the modular completion of the generating function (2.18) of the microscopic degeneracies -a generalization of the usual generating function (2.11) involving two elliptic parameters z, z. The parameter u 2 = Im( z)/τ 2 on the microscopic side is identified with λ, whereas the parameter u 2 = Im(z)/τ 2 must be taken in the attractor chamber in order to match the quantum mechanics result. The function a attr (τ 2 , u 2 ) encodes the modular completion of the original one-parameter generating function A 2,m (τ, z), in a subtle manner which combines the limits u 2 → 0 and | u 2 | → ∞ as discussed at the end of section 2.4.
Our analysis raises several puzzles and open questions. First, it would be interesting to have an independent computation of the continuum contribution to the refined index using Hamiltonian methods. In an appendix, we outline such a computation for the helicity supertrace, but it remains to extend this approach to the case of the refined index. Second, it would be useful to justify why the imaginary part of the chemical potential ξ 1 induces a scalar potential on Taub-NUT space, and whether the imaginary part of ξ 2 has a similar effect. Third, we have observed certain relations between the indices I ± = Tr(−1) F ± , the helicity supertrace I 2 and the second derivatives of I ± (y, v) with respect at y = 1, v = 0 at the level of the discrete state contributions, and it would be interesting to establish if these relations continue to hold beyond the limit β → ∞.
As for the comparison with the generating function of microscopic degeneracies of N = 4 dyon bound states, it is satisfying that the quantum mechanics produces the correct non-holomorphic completion term of the full three-variable Appell-Lerch sum (2.18), but it is puzzling that it matches the bound state contributions only in the attractor chamber u 2 = − /2m (albeit for all values of u 2 ). This is presumably due to the fact that we have not found a natural rôle for the chemical potential u 2 = Im(z)/τ 2 in the quantum mechanics. It would be interesting to understand the physical relevance of the threeparameter generating function defined in (2.19), and see whether a similar refinement exists for the generating function of single-centered N = 4 black holes. Another issue worth clarifying is the dependence of the result (4.20) on the direction of the derivative in Equation (4.19).
Finally, it is interesting to note that the quantum mechanics on Taub-NUT with 4 supercharges provides an alternative description of the dynamics of two-centered black holes in N = 2 string vacua, which is different from the one studied in [32,46,47]. In section 4.5 we computed the index using localization, and found that the result (4.23) contains both a term proportional to the complementary error function, also present in [32], as well as a Gaussian term, which is in fact necessary for the modular covariance of the generating function of MSW invariants [27,28]. It would be interesting to apply similar localization techniques to the case of multi-centered black holes, where mock modular forms of higher depth are expected to occur [30]. Interestingly, such modular objects arise in the computation of elliptic genera of squashed toric manifolds [54], and presumably also in the context of higher rank monopole moduli spaces, which may provide a useful model for the dynamics of multi-centered black holes.
(A.15) In the limit β → ∞, this reduces to the helicity supertrace (3.28), but is a continuous function of λ for any finite value of β (albeit not differentiable at λ = 0.) It is tempting to identify the contributions J ± (β) with the indices I ± (y, β) at y = 1. However, they differ from the localisation result (4.20), and there is no reason a priori to expect that the helicity supertrace I(β) should be related to the sum of the indices I ± (y, β), even though this appears to be the case for the contribution of the discrete spectrum.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. | 14,396.2 | 2018-12-01T00:00:00.000 | [
"Physics"
] |
The role of local-geometrical-orders on the growth of dynamic-length-scales in glass-forming liquids
The precise nature of complex structural relaxation as well as an explanation for the precipitous growth of relaxation time in cooling glass-forming liquids are essential to the understanding of vitrification of liquids. The dramatic increase of relaxation time is believed to be caused by the growth of one or more correlation lengths, which has received much attention recently. Here, we report a direct link between the growth of a specific local-geometrical-order and an increase of dynamic-length-scale as the atomic dynamics in metallic glass-forming liquids slow down. Although several types of local geometrical-orders are present in these metallic liquids, the growth of icosahedral ordering is found to be directly related to the increase of the dynamic-length-scale. This finding suggests an intriguing scenario that the transient icosahedral connectivity could be the origin of the dynamic-length-scale in metallic glass-forming liquids.
that it is of the order of interparticle distance and grows by a factor of 2 to 9 as the glass-forming liquids cool towards a mode-coupling critical temperature, T c 4,22,23 . Studies on polydispersed glass-forming liquids 24 and a two-diemnsional metallic liquids 25 have domenstrated that the static and dynamic are coupled 26,27 . However, some other studies have shown that the static length scale increases at a slower pace than the dynamic-length-scale in decreasing the temperature of glass-forming liquids 5,28 . The growth of local geometrical-orders (LGOs) is also argued to be the cause of the rapid rise in the relaxation time of cooling glass-forming liquids [29][30][31] . In three dimensional systems of monodispersed hard-spheres, the icosahedron is the most locally-preferred structure and increasing its number while cooling is believed to be linked directly to vitrification 32,33 . In multicomponent, polydispersed metallic systems, an increasing number of five-fold symmetry clusters is reported to be the reason for their better glass-forming ability (GFA) 34,35 . However, the role of specific LGO on the growth of dynamic-length-scale is not yet undersood, although numerous studies have shown rapid growth of dynamic-length-scale and LGOs on cooling glass-forming liquids towards the glass-transition temperature T g . In this article, we studied a ternary metallic glass-forming system to understand whether the growth of any specific LGO correlates with the increase of dynamic-length-scales by using quasielastic neutron scattering (QENS) and molecular dynamics (MD) simulation techniques.
Results
The Cu-Zr-Al is a well-known glass-forming metallic system with a distinct GFA 36 . For this study, we have chosen the following composition: (Cu 50 Zr 50 ) 100−x Al x (x = 2, 4, 8, and 10). The GFA of the system is found to increase with the addition of Al in Cu 50 Zr 50 36 . The QENS experiments were conducted on a time-of-flight neutron scattering instrument, Pelican, at Bragg Institute in Sydney, Australia 37 where f q is the Debye-Waller factor, τ α is the relaxation time, and β is the stretching exponent. The value of the stretching exponent was found to be β < 1 and temperature dependent, but composition-and Q-independent. At the lowest measured temperature, we obtained a value of β = 0.6 ± 01, and at the highest temperature β = 0.8 ± 0.1. The stretching of Φ(q, t) indicates the multiple relaxation processes and the presence of heterogeneous dynamics in CuZrAl liquids. As we mentioned earlier, the DH can be estimated by calculating the four-point correlation function χ 4 (t), but this quantity cannot be evaluated from a QENS experiment. However, recent theoretical advances have shown that the χ 4 (t) is related to the dynamic susceptibility, χ T (Q, t) by the fluctuation-dissipation theorem 38 . The χ T (Q, t) can be evaluated from the Φ(Q, t) which is readily obtained from QENS experiments. The dynamic susceptibilities were obtained by χ = ∂Φ ∂ Q t ( , ) T Q t T ( , ) . Figure 1b shows the χ T (Q, t) of the (Cu 50 Zr 50 ) 94 Al 4 liquid in a semi-logarithmic representation, which shows that the strength of DH increases with cooling. The strength of χ T (Q, t) indicates the extent of spatial correlation in the atomic motion 39 (Fig. 1b). The strength of DH in (Cu 50 Zr 50 ) 100−x Al x liquids is quite similar, but slightly increases with increasing concentration of Al. It has been proved that the onset of cooperative dynamics in metallic glass-forming melts begin at a temperature ~2T g , where T g is the calorimetric glass-transition temperature 40 . Therefore, in these high density liquids our experimental results confirm the presence of dynamic heterogeneities in (Cu 50 Zr 50 ) 100−x Al x well above the T g to the measured maximum temperature and the increase in the length-scale of correlated atomic motion in cooling the liquids.
The MD simulations were performed with a system of 100,000 atoms using the LAMMPS software and employing the potential developed by Sheng et al. 41 . To estimate the strength of DH and its temperature dependence, we first evaluated the self-part of the four-point correlation function 23 , 2 are overlap functions that are unity for − ≤ r r a 1 2 and 0 otherwise (Fig. 1c). We chose the distance parameter a = 1, which is the plateau value of the square of mean square displacement 23 . Although the absolute values of the strength of χ T (Q, t) and χ 4 (Q, t) obtained from the QENS and MD simulation cannot be compared, the growth rates with respect to the temperature are very similar (see Fig. 1b,c). Our results indicate that the relation between χ T (Q, t) and χ 4 (Q, t) proposed by Berthier and co-workers of polyhedrons in these metallic liquids at all temperature ranges 35 . However, the most abundant polyhedron in all four systems, above the melting temperature, was found to be <0,3,6,4> polyhedron, while the next abundant polyhedron was <0,2,8,2>. The population of these two polyhedrons increases with the Al concentration, while the growth of a specific polyhedron in cooling the liquids is dependent on the type of polyhedrons. The growth of the <0,3,6,4> polyhedron, which is most abundant in these liquids, is saturated in the supercooled liquid (see Fig. 2b). The next most abundant <0,2,8,2> polyhedron has grown almost linearly while cooling the liquids (see Fig. 2c). Interestingly, the icosahedra, <0,12,0,0> has grown much faster in the undercooled liquids (see Fig. 2d). In cooling these four alloy liquids, the growth of specific polyhedrons is similar, but the percentage of icosahedrons increased with the concentration of Al at any given temperature (Fig. 2d).
The dynamic-length-scale in these alloy liquids at various temperatures was obtained by the following procedures. First, we calculated the four-point, time-dependent structure factor for self-overlapping particles, S 4 (q, t), which is defined as, , and t is the time at which the maximum of dynamical four-point susceptibility χ 4 (t) 4,42,43 . The DH is transient in time, reaching a maximum value at around the structural relaxation time, which measures the degree of cooperativity of structural relaxation. Second, the S 4 obtained at low-q values at a given temperature were fitted with Ornstein-Zernike form (see Fig. 3b), where ξ 4 is the dynamic correlation length or dynamic-length-scale. In decreasing the temperature, the ξ 4 increases exponentially (see Fig. 3b). However, the growth rate of ξ 4 depends on alloy composition, with a higher amount of the Al content in the liquids a faster growth rate of ξ 4 was observed. In the (Cu 50 Zr 50 )Al 10 , which has the highest Al content, the ξ 4 increasing from 2 to 7.5, while in alloys with 2% Al, ξ 4 increases marginally (see Fig. 3b). As we compare the growth of ξ 4 with different polyhedrons, the growth rate shows two regimes for all types of polyhedron other than the icosahedron. The ξ 4 grew a little with respect to number of <0,3,6,4> or <0,2,8,2> polyhedrons in the melts but grew substantially in the undercooled states. Surprisingly, the growth of ξ 4 shows a direct correlation with the population of icosahedrons in these liquids. This relation holds good in these highly dense alloy liquids in a temperature range as low as 300 K below T m and as high as 300 K above T m .
Discussion
Our study experimentally confirmed the existence of the dynamic heterogeneity in the glass-forming (Cu 50 Zr 50 ) 100−x Al x liquids and the corresponding length scale of correlated motion over a temperature range of 600 K. This is visible from the stretching of intermediate scattering function and the strength of dynamic susceptibility, respectively. Both the dynamic heterogeneity and the dynamic length scale were found to increase with Al content and cooling of alloy liquids. At the same time, the atomic mobility was found to decrease, which indicates strong coupling between structural relaxation and the dynamic-length-scale. Using the MD simulation, we quantitatively determined the population of different polyhedrons as a function of temperature in these liquids. Although the population of the icosahedron is very little in high temperature liquids, it increases substantially in an undercooled state. The population of icosahedrons grew exponentially in the range of temperature studied. Similarly, the dynamic length scale evaluated from the four-point dynamic structure factor increases exponentially with a decreasing in temperature. However, the growth of other polyhedrons did not follow specific temperature dependence. In fact, the growth of most abundant <0,3,6,4> polyhedron saturated in undercooled liquids, and the <0,2,8,2> polyhedron grows almost linearly with temperature. This suggests a strong correlation between the increase of the dynamic-length-scale and growth of icosahedrons in liquids and the possible percolation of a string-like icosahedral medium-range order 35 . Additionally, as a function of Al content, the number of icosahedra increases and atomic dynamics slows down. This result suggests that the slowdown of the atomic dynamic in these liquids must have a strong coupling to the growth of icosahedron with a characteristic lengthscale changes as seen also in nonlinear dielectric studies 44 . Entire existing studies in which both static and dynamic-length-scale have been computed for the same glass-forming liquids show that these length scales do not correlated each other [4][5][6] . The value of static-length-scales calculated in various studies generally found to be smaller than the dynamic one and the difference increases with decrease in temperature 45,46 . Therefore, we did not attempt to calculate the static-length-scale in the (Cu 50 Zr 50 ) 100− x Al x liquids. However, dynamic-length-scales obtained in our study exhibit a power law behaviour with relaxation , where a is constant z is the dynamic critical exponent. The value of z varied from 4.37-10.06 (see Fig. 3f). Such a power law behaviour has been reported in many glass-forimg liquids 23,47 . It has also been shown that the icosahedral clusters have a strong tendency for connectivity and form a string-like network [48][49][50] . Therefore, we explored the possibility of a correlation between the increase of dynamic-length-scale and growth of icosahedra in these alloy liquids. We scaled the population of the icosahedron with dynamic-length-scale in the four alloy liquids investigated. Surprisingly, a linear relationship was observed in the alloy liquids (Fig. 3e) a Q range of 0.2 Å −1 -1.9 Å −1 and an energy resolution of 70 µeV for the experimental setup. Since the maximum of static structure factors of the alloy melts are around 2.8 Å −151 , the scattered signals from our experiments are mainly due to the incoherent scattering from the Cu atoms of the samples. 
Therefore, the dynamics that we observed from the QENS data are of the self-dynamics of Cu atoms in the respective alloy liquids. For the high temperature measurements, we used an ILL-design vacuum furnace. The QENS data were collected from high temperature down to the melting temperature of each alloy (1573 K, 1473 K, 1373 K, 1273 K, and 1223 K). The QENS data were collected over a duration of 4 hours at each temperature. An empty Al 2 O 3 sample holder was also measured at each temperature for background subtraction and self-absorption correction. For correcting the detector efficiency, a vanadium crucible with a similar sample geometry was also measured at room temperature for 4 hours.
QENS data analysis. The raw data was normalized to the vanadium data, and converted to the dynamic structure factor using LAMP (Large Array Manipulation Program). The self-absorption of the sample S in a container C was corrected using where S c and S S+C is the scattering from the container, and from both sample and container respectively.
is the correction factor for scattering and self-absorption of the container, is the correction factor for scattering due to the container and absorption in both sample and container and θ ω is the correction factor for scattering due to the sample and absorption in both sample and container. At last, the intermediate scattering functions Φ(Q, t) were obtained by Fourier transforming the dynamic structure factor. Molecular Dynamics Simulation. The molecular dynamics (MD) simulations were done for the same compositions measured in QENS experiments using Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS); a free software obtained from the Sandia National Laboratories, USA. The embedded-atom method (EAM) potential was used to describe the interatomic interactions. The time step used in the simulation was 2 fs and periodic boundary conditions were applied. The Nose-Hoover thermostat was used to control the temperature. For each system, the initial configuration containing 100,000 atoms was equilibrated at 2000 K for 5 ns followed by rapid quenching to the desired temperatures with the rate of 2 × 10 11 Ks −1 in NPT ensemble. The volume of the system was adjusted to give zero pressure during cooling. Before taking the structural configurations, the systems were relaxed for extra 1 ns by the NVT ensemble. Voronoi tessellation calculates the polyhedral cells which have planar faces and completely fill the space by constructing bisecting planes along the lines connecting the target atom and its neighbors. The polyhedrons surrounding the central atom are described by the Voronoi index <n 3 ,n 4 ,n 5 ,n 6 …>, where n i is the number of i-edged faces of the polyhedron. A cutoff distance of 5 Å was used in this study. | 3,337.6 | 2017-06-13T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Free-Form Lenses: Why My Patient is Not Wearing My Prescription?
Ophthalmic lenses compensate for refractive errors, but also produce aberrations. Seidel [1], in 1955 described 5 types of monochromatic aberrations affecting standard imaging systems as cameras, telescopes and microscopes: spherical aberration, coma, curvature of field, distortion and oblique astigmatism [1]. However, an ophthalmic lens has particular properties that require a slightly different approach. Instead of being used at full field, the ophthalmic lens is scanned by the eye behind it. For any gaze direction, only a small portion of the lens is used to create the foveal image. This portion approximately has the size of the pupil and it is located at the point where the visual axis crosses the lens. The consequence is that Seidel aberrations affect lens performance differently to the way they would do in standard imaging systems. In particular, curvature of field and oblique astigmatism produce the joint oblique aberrations, causing the lens to deliver a different than expected power when viewing through the lens at oblique directions. Oblique aberrations appear when rays of light from an object go through the lens at an oblique angle, for example, when a patient is looking away from the optical center using the peripheral area of the lens or when the lenses are not fitted completely perpendicularly to the viewing direction of the patient. In these situations, the lens is effectively providing a prescription with incorrect values of sphere, cylinder and axis, causing blurry vision and loss of visual field. Conventional production of lenses is based on the usage of a small number of different molded blanks with specific power characteristics on the frontal surface of the lens and generating the wearer’s prescription on the back side by shaping a sphere or torus. Using molded blanks with spherical geometry, it is possible to either reduce the astigmatic or the spherical component of the oblique aberration by selection of the best-form base curve (which is the name we use for the frontal surface) according to Tscherning’s theory. In other words, given the lens refractive index and the user prescription, there is a value for the dioptric power of the frontal surface of the lens that minimizes the oblique aberrations. For example, according to Tscherning’s theory, a lens with a prescription of +2D should be produced with a base curve of 7.50D in order to minimize oblique astigmatism [2,3].
Introduction
Ophthalmic lenses compensate for refractive errors, but also produce aberrations. Seidel [1], in 1955 described 5 types of monochromatic aberrations affecting standard imaging systems as cameras, telescopes and microscopes: spherical aberration, coma, curvature of field, distortion and oblique astigmatism [1]. However, an ophthalmic lens has particular properties that require a slightly different approach. Instead of being used at full field, the ophthalmic lens is scanned by the eye behind it. For any gaze direction, only a small portion of the lens is used to create the foveal image. This portion approximately has the size of the pupil and it is located at the point where the visual axis crosses the lens. The consequence is that Seidel aberrations affect lens performance differently to the way they would do in standard imaging systems. In particular, curvature of field and oblique astigmatism produce the joint oblique aberrations, causing the lens to deliver a different than expected power when viewing through the lens at oblique directions. Oblique aberrations appear when rays of light from an object go through the lens at an oblique angle, for example, when a patient is looking away from the optical center using the peripheral area of the lens or when the lenses are not fitted completely perpendicularly to the viewing direction of the patient. In these situations, the lens is effectively providing a prescription with incorrect values of sphere, cylinder and axis, causing blurry vision and loss of visual field.
Conventional production of lenses is based on the usage of a small number of different molded blanks with specific power characteristics on the frontal surface of the lens and generating the wearer's prescription on the back side by shaping a sphere or torus. Using molded blanks with spherical geometry, it is possible to either reduce the astigmatic or the spherical component of the oblique aberration by selection of the best-form base curve (which is the name we use for the frontal surface) according to Tscherning's theory. In other words, given the lens refractive index and the user prescription, there is a value for the dioptric power of the frontal surface of the lens that minimizes the oblique aberrations. For example, according to Tscherning's theory, a lens with a prescription of +2D should be produced with a base curve of 7.50D in order to minimize oblique astigmatism [2,3].
However, in practice, this selection of base curve has some limitations. First, the recommended base curve, according to Tscherning's theory, is too curved and leads to thicker and heavier lenses and, in consequence, less ergonomic glasses with a worse aesthetic appearance. Secondly, the best form lenses require labs to keep a wide stock of molded blanks with base curves adapted for each prescription which increases the cost of the manufacturing process. In order to avoid some of the limitations of spherical base curves, it is possible to use a spherical base curves that allow the production of flatter and thinner lenses with an acceptable reduction of oblique aberrations. However, there are still important limitations because the conventional production does not allow different asphericity levels as required by the prescription. In addition, lens tilting by pantoscopic or wrapping angles as well as grinded prisms by decentration is not recommended in aspheric lenses. So, the use of aspherical molded blanks is basically helpful in improving the aesthetics of the lens.
In recent years, the way in which the lenses are manufactured has changed significantly through the introduction of free-form technology. In comparison with the conventional manufacturing processes that only allow the generation of a sphere or torus on the back surface of the lens, free-form technology allows the production of arbitrary surfaces. So, the combination of spherical molded blanks with a back surface produced point by point allows a much better compensation of the oblique aberrations. Further, it is possible to compensate the oblique aberrations according to the tilt of the lens. In other words, it is possible to optimize the lens for all gazes, according to the visual requirements of each patient and the specification of the frame shape and tilts, that is, to provide a full customization of the lens design. Thanks to sophisticated optical design software, it is possible to calculate the unwanted sphere and cylinder power errors (oblique aberrations) that decrease the visual quality for patients. 0ptimization algorithms will optimize the back lens surface to compensate for oblique aberrations taking into consideration all the factors involved: lens refractive index, prescription, base curve, frame characteristics and the position of wear of each patient (interpupilary distance, fitting height, back vertex distance, pantoscopic angle, wrapping angle or near working distance). The result is a user-compensated lens (SV, bifocal or PAL) with an individual back surface customized for the wearing position of each patient.
So, how does a customized lens perform when measured in a lensometer? It is important to note that the lensometer measures the power of ray beams perpendicular to the back surface of the lens. This classic configuration matches the power perceived by the user only for central vision and when the lens is fitted with no tilts. Due to the compensation of oblique aberrations for all gaze directions, the lens may present varying local differences in power in relation to the original prescription. When checking the lens with the lensometer, powers can be altered at the reference points: the optical center for SV lenses or the distance and near reference points for PALs. In addition, in the case of SV lenses, where constant power values over the whole lens would be expected according to the conventional manufacturing process, we can measure significant power differences at different points on the lens. However, it is important to consider that these variations of power ensure that the patient has the best prescription when looking through the different areas of the lens, improving the visual, even in its peripheral areas.
Discussion
The effect of oblique aberrations and their compensation in the user's visual performance can be analyzed from a theoretical point of view or by wearer trials that evaluate the satisfaction rates of the final user. A theoretical analysis can be done by modelling both the lens-eye and the lens-lensometer systems by means of exact ray tracing. A dense grid of points is created on the lens and the power at each point computed for both systems. The results can be compared by the use of sphere and cylinder maps showing the power measured by the lensometer and the power perceived by the user. For example, in Figure 1 a theoretical comparative analysis is presented of a user compensated SV free-form lens and the corresponding conventional SV lens. The lens has a refractive power of +6.50D, produced with a base curve of +8.00D and fitted in a frame with a wrapping angle of 15º, pantoscopic angle of 8º and back vertex distance of 14mm. As expected, the lensometer power of the conventional lens is constant (both sphere and cylinder) all over the surface of the lens. However, the cylinder of the user perceived power grows above 2 D at both the nasal and temporal sides as a consequence of the strong oblique aberrations produced by this combination of power, base curve and tilts. The area without significant aberration is reduced to two points in the central area of the lens. On the other hand, for a free-form SV lens compensated according to the individual personalization data of the patient, the spherical and cylinder power change point by point all over the surface of the lens. Measured power in the optical center is +6.42sph, -0.29cyl x 114º. However, the aberration perceived by the user is largely reduced, providing a much wider and clear field of view and thus, more comfort. We should stress here that "user-perceived power" does not mean a subjective perception of optical power, but the actual vergence of the light beams entering the eye for any gaze direction when the lens is positioned "as worn". So any user-perceived power different than the actual prescription will automatically yield a reduction of visual acuity.
In addition to the theoretical analysis of the user perception, different randomized double-masked wearer trials have been carried out by our research group in order to compare the visual performance of conventional lenses (without compensation of oblique aberrations) and user compensated free-form lenses (with compensation of oblique aberrations). The performance of user compensated SV lenses was tested in 22 subjects aged between18 and 40 years. Refractive errors were between +4.00 and -8.00D with astigmatism lower than 2.00D. Patients were asked to wear 2 different pairs of glasses over a period of time and select their preferred choice. Tested lenses were: 1) A conventional sphero cylindrical SV lens using a standard base curve selection and 2) a free-form SV customized for each patient and produced with the flatter possible base curve. Results showed a clear preference for free-form lenses: 68% of patients selected free form SV lenses, 27% selected the conventional SV lens and 5% of the patients did not perceived differences between both designs [4].
Secondly, the satisfaction of user compensated PAL was tested in 30 presbyopic individuals aged between 45 and 65 years with refractive errors between +4.00 and -6.00D with astigmatism lower than 2.50D. Patients wore 2 PAL designs developed ad-hoc for the study with the same power distribution but different optimization methods for the compensation of oblique aberrations as follows: A) Lens optimized with a merit function that tries to match lensometer power to the prescription and B) Lens optimized according to the user personal parameters with a merit function that tries to match user-perceived power to the prescription. After using each design for 7 days, patients were asked to select their preferred design. Results showed that 63% of subjects preferred the user compensated lens, 20% preferred the conventional lens and 17% did not perceived differences, indicating a clear preference for user compensated lenses [5,6] (FIgure 2).
Conclusion
New manufacturing technologies of lenses have allowed new calculation methods that compute and compensate the unwanted sphere and cylinder power errors (oblique aberrations) that decrease the visual quality of the patients. Consequently, the measured power in the lensometer of user compensated lenses varies at each point of the lens and can be significantly different to the original prescription. Theoretical and clinical analyses have demonstrated the efficacy of the compensation of the oblique aberrations to improve the visual quality and satisfaction of patients. | 2,984.8 | 2017-02-27T00:00:00.000 | [
"Physics"
] |
Molecular dynamics calculations of stability and phase transformation of TiV alloy under uniaxial tensile test
In this paper, molecular dynamics (MD) simulation software LAMMPS is used to simulate the elastic properties and stability of Ti-V single-crystal alloys. The relationship between the elastic constant and the mechanical stability of Ti-V alloy with a body-centered cubic (BCC) structure is studied. The energy relationship between TiV alloys with hexagonal close-packed (HCP) structure and BCC structure are compared, respectively. The effects of temperature, crystal orientations, and V content on the mechanical properties of TiV alloys are calculated under uniaxial tensile test. The results show that both ultimate tensile strength and plasticity of the Ti-V alloy with BCC structure decrease with the increase of temperature and V content, due to the phase transition from the BCC structure to the face-centered cubic (FCC) structure. Finally, it is identified that the modes of the transformation from BCC structure to FCC structure during the tensile process are BCC(100)//FCC(110), BCC(010)//FCC(1 1¯ 0).
Introduction
Titanium and its alloys are widely used in high-temperature heating building materials. The Ti-V alloy with a BCC structure has been widely used in the development of high-temperature structural components in nuclear reactors and automotive industries and aerospace [1]. In principle, pure BCC titanium may be a good choice for high temperature applications, but it only exists at high temperatures above 1155 K, and it is dynamically unstable at low temperatures. Alloying with vanadium, molybdenum, iron, manganese, niobium and other elements can stabilize the high-temperature BCC phase of titanium at a temperature below 1155 K. As a model system, the BCC titanium-vanadium alloy is dynamic and thermodynamically stable over the entire concentration range at high temperatures, but cannot exist stably at low temperatures [2]. Therefore, the research on BCC Ti-V alloy, especially the analysis of its thermodynamic and mechanical properties has aroused great interest.
The reported experimental Ti-V phase diagrams can be divided into two groups: with and without the isostructural phase separation line for the BCC solid solutions, respectively. Murray et al and Nakano et al pointed out that there is a reaction at 948 K: β-Ti,V=α-Ti+V [1]. However, Wei and Fowler [3] believed that this reaction was caused by oxygen impurities and pointed out that there is no stable monotectoid reaction was likely to exist in this system. Nowadays, the Ti-V phase diagram without monotectic reaction is the widely accepted diagram [4].
At present, most of the development of new materials is based on labor-intensive scientific research and expensive experimental research. In recent years, the most important and difficult task of condensed matter physics is to reduce labor intensity and reduce experimental expenses. First-principles methods serve as a Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
powerful tool for generation of reliable data on thermodynamic properties [1,[5][6][7]. However, quantum mechanics cannot be used in isolation because it needs to consider the effect of electrons' behavior [8]. For this the reason, multi-scale simulation calculation based on the calculation parameters of quantum mechanics is a very promising large-scale material calculation method such as the Deep MD potential function (Deep learning package for many-body potential energy representation and molecular dynamics )developed by machine learning and the semi-empirical (Second Nearest-Neighbor Modified Embedded-Atom Method Potential)2NN MEAM potential function [9,10]. The molecular dynamics software LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator ) adopts the relevant parameters of quantum mechanics calculations and then combines Newton's second law to obtain large-scale calculation software, which has broad application space in materials, biomedicine, and chemical engineering [11].
Many scholars have studied the dynamic instability of titanium and the performance of the Ti-V system using first principle calculation methods. However, it is extremely difficult to calculate the instability of the BCC-Ti phase using the first principle calculation methods like density functional theory [12]. It is more accurate to determine the mechanical stability of the Ti-V system based on the elastic constant calculated using the first principle calculation method [13]. It is shown that the instability is relatively strong and C′ (tetragonal shear modulus) is quite negative for BCC-Ti metal, However, alloying with vanadium rapidly increases C′ and reduces the strength of the instability, it is in accordance with the behavior of the bcc phase line in the Ti-V alloy.
In this paper, the elastic properties, mechanical instability, and phase transformation of the Ti-V system under tensile loading are calculated based on the molecular dynamics software LAMMPS. First, the lattice constant and elastic constant of the BCC-TiV system are calculated. Then the relationship between the 3D representation surface of Young's modulus and phase stability is investigated in detail. Finally, the uniaxial tensile test simulation on the BCC-TiV single crystal was conducted for exploring the influence of the tensile deformation process on its mechanical properties under the effect of crystal orientation and temperature.
MD simulation process
MD simulation is performed using LAMMPS [11]. The simulations employ the modified embedded atom method (MEAM) potential developed by Maisel SB [14].
Since the different solid solution ratio of V in Ti directly affects the lattice constant in BCC-TiV, it is necessary to calculate the lattice constants for different BCC-TiV systems, and then calculate the elastic constants and perform tensile tests. The calculation process is shown in figure 1.
Calculation of cohesive energy of Ti-V alloy
Owing to vanadium is a β-titanium (BCC structure) alloy stabilizing element, and the content of vanadium has a great influence on the stability of the Ti-V system, it is necessary to establish a HCP structure and a BCC structure respectively when calculating the cohesive energy of the Ti-V system. In the calculation of the HCP structure, the LAMMPS is applied to set a circulant matrix with a-axis and c/a-axis as parameters with consideration of the different substitution ratios of V to obtain the lattice constant a from 2.9 to 3.0, c/a from 1.5 to 1.7. The atomic ratio of vanadium is 1%, 10%, 30%, 50%, 70%, which correspond to the average cohesive energy of each atom.
The same method is also used to calculate the corresponding relationship between the lattice and the cohesive energy of the BCC Ti-V system.
Elastic constant and representation surface of Young's modulus
The elastic constant is a value calculated based on Hooke's law. Ordinary crystals need to calculate 21 independent elastic tensors [15]. For cubic crystal, there are three independent parameters: C 11 , C 12 , and C 44 . Therefore, the general form of Hooke's law can be reduced to formula (1): They are connected to the tetragonal shear modulus ¢ C : The elastic relations (Hooke's law) between the strain (ε) and stress (σ)matrices are mediated by the elastic compliance (S) or the elastic stiffness (C) matrices: From the elastic equations, the relations between the compliance matrix and the stiffness matrix are The elastic matrix for body-centered cubic has the form: The relations between the elastic stiffness constants C ij and the elements of the compliance matrix S ij are found from equation (5) [16]: The elastic properties of single crystals are completely determined by the elastic stiffness matrices and the elastic compliance matrices. In reality, polycrystalline materials are considered more often than single-crystal. The elastic properties of polycrystalline materials are determined by two independent elastic moduli: the shear modulus (G) and the bulk modulus (B).
The mechanical properties of polycrystalline materials are approximated in the Voigt-Reuß-Hill approach [17], where the bulk and shear moduli are given by arithmetic averages: 2 3 19 3 3 10 15 7 11 12 44 2 44 2 44 The bulk modulus B of a material characterizes its resistance to compressibility, whereas the shear modulus G characterizes its resistance to plastic deformations [18]. Young's modulus E also relates the bulk and shear moduli: Unless all the elastic modulus C ij must be positive, otherwise the mechanical stability is still open [16]. Born et al developed a theory on the stability of crystal lattices [19]. For Body-centered cubic crystals, the five elastic stability criteria are given by: , 0 0 9 11 12 11 12 44 11 12 The representation surface of Young's modulus for Body-centered cubic system is given by [20]: where l 1 , l 2 , l 3 are the three physical dimensions of space-length, width, and height in the 3D Cartesian coordinate system, respectively. Studies have shown that the mechanical stability of β-Ti alloys with low elastic modulus can be determined based on the size of criterion E 001 [21]. E 001 is the Young's modulus of elasticity of the BCC crystal in the 〈001〉 crystal direction and its relationship with the elastic constant is shown in formula (11). When E 001 is positive, its structure can exist stably. The anisotropy of single crystal BCC-TiV Young's modulus is so prominent that it is necessary to study the anisotropy of Young's modulus of single-crystal.
Modeling and analysis method of tensile deformation process
The BCC-TiV supercell is constructed to model the BCC structure, and part of Ti atoms are replaced using V atoms for the modeling (60 Å×60 Å×120 Å), and then the NPT (isothermal-isobaric) is performed for relaxation of the structure until the system stabilizes at a specific temperature. After the relaxation, both the uppermost atom (thickness of 10 Å) and the lowermost atom (thickness of 10 Å) of the structure are fixed. Periodic boundary conditions in the x and y directions and non-periodic boundary conditions in the z direction are applied, respectively. The temperature of the system is controlled under the NVT canonical ensemble and the stretching command is executed. And then the atomic stretching process is visualized through OVITO [22].
The V atom substitution model in Ti-V system is shown in figure 2.
Results and discussion
During the MD simulation process, the relationship between the solid solution ratios of V and lattice constant, mechanical strength, phase stability of α-Ti of HCP structure and β-Ti of BCC structure were calculated.
Cohesive energy
The calculation result of the cohesive energy with the lattice constant is shown in figure 3(a), from which the lattice constant value corresponding to the lowest cohesive energy can be obtained. The correlation of the solid solution ratios of V in β-Ti with the change of the lattice constant can be obtained from figure 3(b). It can be seen from the figure that the lattice constant decreases from 3.23 Å to 2.99 Å (V) as the content of V in β-Ti increases. As can be seen the difference in the cohesive energy of the TiV alloy system of the HCP structure and the BCC structure in figure 3(c), the average atomic energy of the BCC structure is lower than that of the HCP structure after the V content exceeds 1%, therefore the TiV system with BCC structure is more stable than that with HCP structure as the V content increases. It is known that BCC-TiV cannot exist stably when the V content is 1% [23].
It can also be seen from figure 3(b) that the lattice constant corresponding to the lowest cohesive energy of BCC-TiV has a linear relationship with the V contents and gradually approaches the lattice constant of pure vanadium. Figure 4(a) shows the dependence of elastic constants C 11 , C 12 , C 44 on V concentration in BCC-TiV alloy. It can be seen from figure 4(a) that C 11 increases monotonously, C 12 fluctuates from 100 GPa to 120 GPa, and C 44 has a trend of increase firstly and then decrease, which is consistent with [14]. The V content increased from 10% to 90%, and C 11 increased from 80.9 GPa to 227.49 GPa, which is consistent with the [21,24]. The obtained elastic results for C 11 show good quantitative agreement with the experimental data [21]. The fluctuation of the elastic constant of C 12 within a certain range is due to the instability of BCC-TiV as previously report shown [14]. In the calculated result, C 44 increases firstly and then decreases, which is slightly larger than the experimental value, but overall, it is within a reasonable range. The stability of the BCC structure can be judged using its elastic constant C′(or E 001 ). When the elastic constant C′ is greater than zero, its structure is relatively stable, otherwise its structure is not stable. The dependence of the elastic constant C′ on V concentration is shown in figure 4(b). It can be seen from the figure that the calculated elastic constant C′ gradually increases with the increase of V content, and it exceeds zero by about 15%-20%, which is consistent with the PAW-SQS of [5]. From the previous experimental and calculational results, it is known that when the V content in β-Ti is less than 20%, the elastic constant C′ is less than zero, and this result is consistent with other papers [13,24,25]. This also shows that the calculation model is more suitable for predicting the phase stability of the Ti-V system. In addition, C 12 and C 11 do not have any exclusive physical basis; in other words, no phonon mode directly corresponds to these constants [16]. When vanadium is added, the number of valence electrons increases and the Fermi level shifts toward the higher energy practically without changing the shape of the band and provides atomic bonds. This effect leads to an increase in the elastic constant C 11 and, as a result, to the mechanical stabilization of the alloy [5]. C 12 is fairly constant over a large concentration region, which is related to the elastic instability of pure BCC titanium [14]. As can be seen from figure 4(c), the number of Young's modulus of elasticity is increased when vanadium is added. These parameters are of great significance for the development of titanium alloys with low elastic modulus. The bulk modulus B of a material characterizes its resistance to fracture, whereas the shear modulus G characterizes its resistance to plastic deformations as shown in figure 4(d). Figure 5 shows the spatial distribution of Young's moduli E(r) of BCC-TiV single crystal with a V content of 50%, from which it can be seen that it has a relationship with the crystal orientation. Young's modulus of elasticity is the smallest in the 〈001〉 direction and the largest in the 〈111〉 crystal direction. Figure 6 shows the spatial distribution of Young's moduli E(r) with V content ranging from 20% to 70% in BCC-TiV alloy. It is easy to obtain that Young's modulus of elasticity increases in all crystal directions with the increase of V content. 
However, the increment of Young's modulus of elasticity in each crystal direction is different. With the increase of V content, both strength and mechanical stability of the BCC-TiV alloy system are enhanced, and the threedimensional Young's elastic constant of the single crystal tends to be smooth. It can be seen from figure 6 that Young's modulus of β-TiV in each crystal orientation increases with the increase of V content, and the 〈110〉 crystal orientation in the (010) plane increases compared to the 〈001〉 crystal orientation. The content of V in β-TiV increases from 20% to 70%, and Young's modulus in the 〈001〉 direction increases from 4.1 GPa to 111 GPa. Since the elastic constant in the Ti-V system is highly dependent on the crystal orientation and V content, it will affect the strength and plasticity.
Uniaxial tensile test simulation
The metal with conventional BCC structure exhibits low temperature brittleness, that is, it exhibits better plasticity at high temperature while brittle fracture at low temperature. This is related to the screw dislocation movement activation of BCC metal. Screw dislocations are affected by interstitial elements, metal bond types and exhibit brittleness at low temperatures [26]. It can be seen from figure 7 that as the temperature increases, the strength and elongation of BCC-TiV decrease. Figure 8(a) displays a diagram of the atomic evolution process of BCC-TiV with 20% V content stretched along the 〈001〉 direction at 1 K and 300 K. The tensile yield strength of 25 GPa is obtained with stretching system temperature of 1 K (the stretching process avoids the influence of thermal activation energy). When the number of stretching steps reached 400,000 steps, that is, the deformation was about 40%, the system also showed yielding, and no crack source was found. In the stretching process, Moreover, there is a phase transformation process that the BCC structure transforms into FCC structure. When the tensile temperature is 300 K, Ti-20%V shows yield at about 200,000 steps, and only part of the BCC structure is transformed into a face-centered cubic structure before yielding. Comparing the atomic cloud diagram of Ti-20%V tensile deformation with different crystal orientations as shown in figure 8(b), 〈111〉 crystal orientation shows the largest Young's elastic modulus. The stretching of the 〈111〉 crystal orientation and the 〈110〉 crystal orientation does not show the phase transformation process, i.e., the change process of BCC structure to FCC structure. Figure 9(a) shows the stress-step curves of BCC-TiV single crystal alloy with different solid solution ratios V ranging from 20% to 60% in the [001] crystal direction.
As the V content increases, Young's modulus of the single crystal increases due to solid solution strengthening, while the strength decreases and the plasticity deteriorates. To explain this, the atomic evolution during tensile deformation of BCC-TiV alloys with 30% and 40% vanadium content was calculated, as shown in figure 9(b). When the vanadium content in the BCC-TiV alloy is 30%, the phase changes from the BCC structure to the FCC structure before 200,000 steps, and an HCP structure appears after 200,000 steps. The appearance of the HCP structure may be related to the formation of a crack source. The transformation from the BCC structure to the FCC structure is the result of a combination of elastic and plastic deformation during stretching. When the vanadium content in the BCC-TiV alloy reaches 40%, the transition rate from the BCC structure to the FCC structure is lower due to the increased phase stability. The tensile curve shows that the structural transformation is relatively stable, and the fraction of the FCC structure determines the decrease in plasticity and strength of the material.
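One way to quantify the BCC/FCC/HCP fractions described above is common neighbor analysis of the LAMMPS dump files. The paper does not state its post-processing tool, so the sketch below is only one possible workflow, using the OVITO Python module and a hypothetical dump file pattern.

```python
from ovito.io import import_file
from ovito.modifiers import CommonNeighborAnalysisModifier

# Hypothetical dump files written during the LAMMPS tensile run
pipeline = import_file("dump.tensile.*")
pipeline.modifiers.append(CommonNeighborAnalysisModifier())  # adaptive cutoff by default

for frame in range(pipeline.source.num_frames):
    data = pipeline.compute(frame)
    n = data.particles.count
    # Per-frame structure-type counts are exposed as global attributes
    bcc = data.attributes["CommonNeighborAnalysis.counts.BCC"]
    fcc = data.attributes["CommonNeighborAnalysis.counts.FCC"]
    hcp = data.attributes["CommonNeighborAnalysis.counts.HCP"]
    print(f"frame {frame}: BCC {bcc / n:.1%}, FCC {fcc / n:.1%}, HCP {hcp / n:.1%}")
```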
A unit cell stretch diagram (figure 10) is adopted to analyze the mechanism of conversion from the BCC structure to the FCC structure. The coordination number of BCC is 8, and the closest distance to the central atom is r = (√3/2)a (a is the lattice constant). Because the BCC structure transforms into the FCC structure during stretching, the conversion is a process of elastic or plastic deformation. According to the stretching deformation process, when stretching in the Z direction the other two dimensions remain unchanged; therefore V_FCC/V_BCC = λ_z, where λ_z is the stretch ratio along the Z direction, V_BCC is the volume of BCC-TiV before stretching, and V_FCC is the volume of the system after the BCC structure is converted to the FCC structure. The coordination number of an atom in the FCC structure is 12, and its coordination distance is r′ = (√2/2)a′, where a′ is the FCC lattice constant.
Combining the above relations, it is obtained that the Z direction stretches by a factor of 1.414. This also means that, without temperature influence, the material can complete the conversion of BCC to FCC in about 375,000 steps. When the vanadium content is 20% and the temperature is 1 K, 67.8% of the BCC structure transforms to the FCC structure at 260,000 steps. This is because vanadium is a stabilizing element of β-Ti.
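The 1.414 factor follows from the Bain correspondence: the BCC cell, viewed as body-centered tetragonal with c/a = 1, becomes FCC when c/a reaches √2 while the transverse dimensions are held fixed. A short numerical check of this geometry (the lattice constant is an illustrative assumption):

```python
import math

a = 3.2  # assumed BCC lattice constant in angstroms (illustrative only)

r_bcc = math.sqrt(3) / 2 * a   # BCC nearest-neighbor distance r
lam = math.sqrt(2)             # z-stretch turning c/a = 1 into c/a = sqrt(2)
a_fcc = math.sqrt(2) * a       # FCC lattice constant after the 45-degree in-plane rotation
r_fcc = a_fcc / math.sqrt(2)   # FCC nearest-neighbor distance r' (equals a)
v_ratio = lam                  # V_FCC / V_BCC, since x and y are unchanged

print(f"stretch factor = {lam:.3f}, V_FCC/V_BCC = {v_ratio:.3f}")
print(f"r = {r_bcc:.3f} A -> r' = {r_fcc:.3f} A, a_FCC = {a_fcc:.3f} A")
```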
As shown in figure 11, the z-axis direction does not change, while the x-axis and y-axis directions rotate 45° clockwise; this is the crystal orientation of the FCC crystal converted from BCC, and the crystal planes change as BCC(100)//FCC(110) and BCC(010)//FCC(-110). The lattice constant is 1.414 times the original lattice constant. This phenomenon has also been confirmed in stretched single crystal BCC-Ta [22].
The above analysis shows that the BCC-TiV alloy system undergoes a BCC-to-FCC phase transformation during stretching along the 〈001〉 direction. This phenomenon is caused by lattice deformation. Increasing the system temperature or the content of the β-Ti (BCC) stabilizing element V hinders the phase transformation from the BCC structure to the FCC structure. After tensile fracture occurs, the FCC structure returns to the BCC structure, which indicates that the transformation of the BCC structure into the FCC structure is an elastic deformation process. Since the β-Ti stabilizing element V hinders this phase transformation, calculations that neglect this elastic deformation process may misjudge the short-range order of the Ti-V system.
Conclusion
In this paper, the molecular dynamics simulation software LAMMPS is used to calculate the cohesive energies of the HCP and BCC structures of the TiV alloy system. The conclusions from the tensile fracture calculations of the TiV system at different crystal orientations and temperatures are as follows: (1) The elastic constant C11 of the BCC-TiV single crystal alloy increases with increasing V content, C44 fluctuates within a certain range, and when the solid solution ratio of V in the BCC-TiV alloy exceeds 15%, BCC-TiV can exist stably.
(2) The BCC-TiV single crystal alloy shows a phase transformation from the BCC structure to the FCC structure during tensile deformation along the 〈001〉 crystal orientation. With increasing V content, the tensile strength and elongation decrease; this is mainly because the increase of V content hinders the transformation from the BCC to the FCC structure. Both tensile strength and plasticity also decrease with increasing temperature.
(3) The deformation from the BCC to the FCC structure during stretching of the BCC-TiV single crystal alloy is mainly caused by elastic deformation of the crystal lattice. The phase transformation from the BCC structure to the FCC structure occurs throughout the entire tensile deformation process. The transition mode is BCC(100)//FCC(110), BCC(010)//FCC(-110).
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors.
"Materials Science",
"Engineering"
] |
The First Video Witness of Coastal Boulder Displacements Recorded during the Impact of Medicane “Zorbas” on Southeastern Sicily
Over the last few years, several authors have presented contrasting models to describe the response of boulders to extreme waves, but the absence of direct observation of movements has hindered the evaluation of these models. The recent development of online video-sharing platforms in coastal settings has provided the opportunity to monitor the evolution of rocky coastlines during storm events. In September 2018, a surveillance camera of the Marine Protected Area of Plemmirio recorded the movement of several boulders along the coast of the Maddalena Peninsula (Siracusa, Southeastern Sicily) during the landfall of the Mediterranean tropical-like cyclone (Medicane) Zorbas. Unmanned autonomous vehicle (UAV) photogrammetric and terrestrial laser scanner (TLS) surveys were performed to reconstruct immersive virtual scenarios in order to geometrically analyze the boulder displacements recorded in the video. Analyses highlighted that the displacements occurred when the boulders were submerged, as a result of the impact of multiple small waves rather than of a single large wave. Comparison between flow velocities obtained from the videos and those calculated through hydrodynamic relationships showed that the models strongly overestimate flow velocity, suggesting that the values of flow density and lift coefficient used in the literature are underestimated.
Introduction
Mediterranean hurricanes, also called Mediterranean Tropical-Like Cyclones (TLCs) or medicanes, are warm-core cyclones that develop over the Mediterranean Sea [1-3] with characteristics similar to tropical cyclones. Such storms consist of rotating cloud systems characterized by gale-force winds, severe precipitation, and a low-pressure center, accompanied by a spiral pattern of thunderstorms [4-7]. Two specific areas appear to be the favored locations for medicane genesis: the western Mediterranean [2,8] and the central Mediterranean-Ionian Sea [9,10]. During the last decades, the impacts of medicanes along the coasts of the Mediterranean basin have strongly affected human settlements, causing extensive damage and casualties [6,11]. Moreover, several authors predict that, in the near future, climate change could modify medicanes, decreasing the frequency of their occurrence but increasing the strength of their impacts [4,12,13]. In the last 10 years, two different medicanes made landfall along the coast of Southeastern Sicily: the first occurred in 2014 and was called Qendresa, and the second occurred in 2018 and was called Zorbas.
In September 2018, Medicane Zorbas impacted the Ionian coast of Southeastern Sicily and was registered, with minor energy, along the coasts of Apulia, Basilicata, and Calabria [14]. Evidence of this storm event was observed along the rocky coast of the Maddalena Peninsula (Siracusa, southeastern Sicily; Figure 1), where a surveillance camera of the Marine Protected Area of Plemmirio recorded several boulder displacements that occurred inside an ancient Greek quarry. This coastal sector is characterized by the presence of important boulder fields, interpreted as evidence of the impact of severe storms and tsunami events in the past [15,16]. Since 2009, the ancient quarry has been intensely monitored, using several survey techniques, to identify and analyze boulder displacements, with the main purpose of verifying whether storm waves could be responsible for the movement of boulders that had been attributed to tsunami events [15].
Whether storm or tsunami waves are responsible for boulder displacements in coastal areas has been the object of considerable debate. Several studies agree that severe storms are generally able to displace most of the small boulders found along coastlines all over the world [17-22]. Some authors have proposed that, although boulder displacements occur both during storm and tsunami events, the main cause of movement of the biggest boulders is probably tsunamis [15,23]. In contrast, other studies ascribe to storms the capability to detach and transport the boulders [24]. A new debate has recently been opened on the number of tsunami events reconstructed for the Mediterranean Sea. Marriner et al. [25] consider this number strongly overestimated, contesting the attribution to tsunamis of most of the field evidence described in the literature and suggesting that cyclical periods of increased storminess, driven by late Holocene climate changes, would be responsible for it. Vött et al. [26] replied to this theory, disputing the tsunami database and the statistical analyses used by Marriner et al. [25] and confirming the reliability of the literature in the definition of tsunami events that occurred in the Mediterranean Sea.
The Mediterranean basin, due to the lack of extreme events such as tropical cyclones, has been considered an excellent area for studies aimed at describing tsunami events [27]. In the absence of direct observation, the analytical approach for the study of boulder displacements has been the application of different hydrodynamic models (e.g., references [28-32]) to estimate the main features of the wave responsible for the dislocation (typology, wave height, wave flow velocity). To apply these models, it is fundamental to acquire high-resolution data on the boulders (size, volume, and mass), but it is also very important to reconstruct (i) the scenario in which the movement occurred (joint-bounded or subaerial/submerged); (ii) the typology of movement (sliding, rolling/overturning, saltation/lifting); and (iii) the extent of displacement. These parameters are often very complicated to deduce from field evidence [33,34]. Up to the present, all studies have started from the assumption that the different positions of a boulder before and after an extreme marine event represent the effect of a single wave impact [20,30,35]. For these reasons, until now, it has been very difficult to test the reliability of hydrodynamic models in the description of the natural processes.
The aim of this work is to overcome this problem through the analysis of the video recorded along the Maddalena Peninsula. Given that video evidence has never been used before to analyze boulder displacements, we defined a methodology to reconstruct immersive scenarios, useful for georeferencing the images recorded in the video and for accurately measuring from them several important parameters, such as the wave heights at the times of impact and the wave flows at the times of displacement. This approach allowed us to rigorously test the hydrodynamic equations, showing that they significantly overestimate flow velocity, probably due to the underestimation of important parameters such as seawater flow density and lift coefficient [31,32,36,37].
Furthermore, this work provides evidence about the dynamics of boulder movements, suggesting that single wave impacts are unlikely to be responsible for displacements, which normally occur as a result of multiple wave impacts generating a turbulent flow able to nullify friction. In addition, our work shows that some boulders, previously interpreted as deposited by a tsunami event, have never been displaced in 10 years of monitoring, in accord with the hypothesis that only tsunamis could be responsible for the deposition of boulders with particular dimensions and shapes.
Geological Settings
Southeastern Sicily (Figure 1A) is part of the emerged portion of the Pelagian Block, the foreland domain of the Neogene-Quaternary Sicilian Collision Zone [38]. It is mostly formed by the Hyblean Plateau, which along the Ionian coastline is dissected by a system of Quaternary normal faults bounding NNW-SSE oriented horst and graben structures [39]. The Maddalena Peninsula, constituted by a Neogene-Quaternary calcareous succession, is one of the horst structures located along the Ionian coast of southeastern Sicily (Figure 1B). The whole area lies at the footwall of a large normal fault system that, since the Middle Pleistocene, has reactivated the Malta Escarpment [40], a large Mesozoic boundary separating the Pelagian Block from the Ionian oceanic domain to the east [41-43].
Southeastern Sicily is one of the most seismically active areas of the central Mediterranean. It is characterized by a high level of crustal seismicity, producing earthquakes with intensities of up to XI-XII on the Mercalli-Cancani-Sieberg (MCS) scale and M ~7, such as the 1169, 1693, and 1908 events [44-46] and related tsunamis [15,16,47-49]. Being located along a collisional belt, at the footwall of a normal fault system, the analyzed coastal area has experienced vertical deformation that, combined with sea-level changes, has been recorded by several orders of Middle-Upper Quaternary marine terraces and palaeo-shorelines [40,50-52]. However, since the Late Pleistocene the Maddalena Peninsula (Figure 1C) has been tectonically stable, as inferred from the elevation of the MIS 5.5 terraces [53], though slightly uplifting during the Holocene [54-57]. According to Anzidei et al. [58], during the last few decades Global Positioning System (GPS) data and Glacial Isostatic Adjustment (GIA) models have indicated current weak subsidence at rates close to 1 mm/yr. This is relevant considering that the area is undergoing heavy coastal retreat and is thus exposed to severe storms associated with high waves, also as a consequence of global sea-level rise [59-62].
The Ionian coast of Southeastern Sicily, between the towns of Augusta and Siracusa, is characterized by the occurrence of anomalous calcareous and calcarenitic boulders. Scicchitano et al. [15] performed direct observations on these boulders (distance from the coastline, size, and weight), together with statistical analysis of the storm regime of the area and hydrodynamic estimations, to verify whether tsunami or storm waves were responsible for their detachment and transport. Radiocarbon age determinations of marine organisms encrusting the blocks, compared with historical catalogues, suggested that, in the last 1000 years, the largest earthquakes with local sources (e.g., the 1169, 1693, and 1908 events) could have triggered tsunami waves that displaced the largest boulders occurring in the area. Other evidence of these tsunami events was found along the coasts of Southeastern Sicily, inside several lagoons [63-65] and in coastal deposits [48].
Along the Maddalena Peninsula, boulder deposits usually occur on large surfaces gently sloping towards the sea, bordered by cliffs up to 5 m high, formed by Pleistocene depositional terraces or rock platforms dissecting Neogene sub-horizontal, well-stratified, and fractured limestones. The surveillance camera detected boulder displacements inside an ancient Greek quarry located in the northern sector of the Maddalena Peninsula, whose floor has been partially submerged by Holocene sea-level rise (up to 40 cm [54]). Boulders located inside the quarry were surveyed with terrestrial laser scanner (TLS) techniques in order to estimate, through the use of a specific hydrodynamic equation [30], the inland penetration limit of the tsunami responsible for the deposition of the blocks [16]. The two biggest boulders surveyed inside the quarry were a boulder of about 13 tons in weight, previously attributed to a tsunami impact [15], and a boulder of about 41 tons displaced in January 2009 by a storm. Hydrodynamic analyses performed by several authors reveal that the coasts of Southeastern Sicily can be affected by severe storms which generate waves able to detach large blocks from the coastal edge and transport them inland [15,16,19,24,66,67].
For the Ionian coast of Sicily, significant spectral wave heights (Hm0) and peak periods (Tp), recorded during the last 18 years by the Catania buoy (RON-Rete Ondametrica Nazionale; www.idromare.com [68]), are available (Figure 2A). Wave data recorded by the Catania buoy indicate that the most severe storm, which occurred on 2 February 1996, was characterized by a significant spectral wave height (Hm0) of about 6.2 m and a peak period Tp = 11.3 s. According to Inglesi et al. [69], the return value Hm0(50), corresponding to a return period of 50 years in the Catania sector, does not exceed 6.24 m. Monitoring of the quarry located on the Maddalena Peninsula indicated that the main boulder displacements occurred as a result of three distinct storms in 2009, 2014, and 2018. The events of 2014 and 2018 were two medicanes, called Qendresa and Zorbas, respectively. Qendresa formed on 5 November and rapidly intensified two days later, reaching peak intensity on 7 November. It directly hit Malta in the afternoon and then crossed the eastern coast of Sicily on 8 November. Later, the cyclone weakened significantly and dissipated over Crete on 11 November. Measurements taken by the ondametric buoys of Crotone and Catania (RON-Rete Ondametrica Nazionale; www.idromare.com [68,70]; Figure 2B) during the passage of Qendresa show values of significant spectral wave height Hm0 of about 4 m. Medicane Zorbas emerged into the Aegean Sea, moved westward to reach the center of the Ionian Sea, then inverted its track, moving over northwestern Turkey. Although Zorbas did not affect southeastern Sicily directly, as Medicane Qendresa did, its impact induced similar effects, as recorded offshore by satellite data (significant wave height Hs of about 4.1 m) (source AVISO satellite altimetry, credits CLS/CNES [71]). Another relevant effect induced by Zorbas was a storm surge, up to 1 m above normal sea level, detected by the tide gauge sited inside Catania harbor (Figure 2C) and also observed in the video recorded on the Maddalena Peninsula.
Material and Methods
In 2003, the technicians of the Marine Protected Area of Plemmirio installed 10 surveillance stations along the coasts of the Maddalena Peninsula with the aim of detecting illegal fishing operations and monitoring sea conditions. One of these stations was positioned in proximity to the ancient Greek quarry sited in the northern sector of the Maddalena Peninsula (Figure 3A). The stations are mounted 9 m high on a pole (Figure 3B). On 28 September 2018, the camera recorded about 4 h of video during Medicane Zorbas. Analyses of this video identified 28 boulder movements inside the quarry. Some of these boulders were already present, and they had been attributed both to tsunami and storm events [15]. The others were detached from the submerged area and transported inland during Medicane Zorbas. In order to proceed with the analysis of the videos extracted from the Control Center database (resolution 1920 × 1080, frame rate of 25 fps, 50 Hz), it was necessary to reconstruct a detailed and accurate immersive virtual scenario of the quarry and of the boulders, before and after the impact of Medicane Zorbas.
In 2009, the northern sector of the Maddalena Peninsula was surveyed using terrestrial laser scanner (TLS) techniques in order to reconstruct the morpho-topographic features of the quarry and of the boulders inside it [16]. The survey was performed by scanning the area from four different points located on the top of the quarry and from two other points sited inside it. The complete TLS dataset was processed and analyzed using HDS Cyclone software in order to remove outliers such as vegetation or anthropogenic features. A dense point cloud was then generated, permitting the reconstruction of a 3D model of the quarry and of the boulders within it. Moreover, since the acquisition of the TLS data, we have monitored the boulder positions inside the quarry by surveying, after every storm event, benchmarks located on the boulders reconstructed with TLS, using RTK-GPS techniques. In January 2015, a few months after Medicane Qendresa (November 2014), we performed UAV photogrammetric surveys of the quarry to detect the positions of the boulders displaced during the Medicane. The results convinced us to replace TLS with UAV photogrammetry techniques (cheaper than TLS and with similar resolution and accuracy) to monitor boulder movements occurring inside the quarry. With this aim, we performed surveys in 2016-2018.
A pre-impact immersive virtual scenario was modelled by combining the point clouds reconstructed with TLS in 2009 with the point clouds obtained from the UAV photogrammetric surveys performed in 2017. This scenario was used to detect in the video morphological features, such as the edges of the quarry, useful as benchmarks to geometrically analyze, through specific software, the images recorded in the video showing boulder movements. To reconstruct the post-impact immersive virtual scenario, we performed, three days after the impact of Medicane Zorbas, a UAV photogrammetric survey (Figure 3D) of the quarry area together with a proximity photogrammetric survey of the new boulders displaced on the coast.
Geographic Information System (GIS) analyses of the products of previous and recent surveys allowed us to identify most of the boulders appearing in the recorded video (the others have probably been fragmented into smaller blocks). The video was analyzed with Tracker (https://physlets.org/tracker/ [73]), a video analysis and modelling software built on the Open Source Physics (OSP, Doug Brown, Cabrillo College, California, U.S.) Java framework. Distances between specific features clearly visible in the video, such as quarry edges, holes, and fractures, were inserted as spatial references in Tracker. Once the spatial reference is defined, by tagging objects in motion (boulders, waves, and flows) in each frame of the video, the software is able to calculate velocities and accelerations. We focused our analyses on the estimation of the flow velocity at the times boulder movements occurred.
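In essence, the velocity estimate reduces to scaling tracked pixel displacements by a known reference length and differencing over the frame interval. A minimal sketch of this computation is shown below; the scale factor and tracked coordinates are hypothetical, and Tracker itself performs the equivalent bookkeeping internally.

```python
import numpy as np

FPS = 25.0        # video frame rate (frames per second)
PX_PER_M = 180.0  # hypothetical scale from a quarry-edge reference distance

# Hypothetical pixel coordinates of a point tracked on a boulder, one row per frame
track_px = np.array([[512, 300], [516, 301], [524, 303], [537, 306], [553, 310]], float)

pos_m = track_px / PX_PER_M                  # pixels -> meters
vel = np.gradient(pos_m, 1.0 / FPS, axis=0)  # central-difference velocity (m/s)
speed = np.linalg.norm(vel, axis=1)
acc = np.gradient(speed, 1.0 / FPS)          # scalar acceleration (m/s^2)

print("speed (m/s):", np.round(speed, 2))
print("acceleration (m/s^2):", np.round(acc, 2))
```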
From the assessment of the boulder features, we calculated, through the hydrodynamic equations of Nandasena et al. [32], the flow needed to start the movement of the blocks analyzed in the videos. Flow values estimated through video analyses were compared with those calculated with the Nandasena et al. [32] model in order to evaluate possible discrepancies.
UAV Data and GIS Analyses
The use of Unmanned Autonomous Vehicles (UAV), better known as drones, in various fields of geoscience has increased during the last five years. In coastal geomorphology studies, UAV photogrammetry techniques have been applied for flood estimation [58,74] and for boulder field monitoring [20,21]. Since 2015, the study area has been seasonally monitored with UAV photogrammetric surveys in order to detect boulder movements inside the Greek quarry on the Maddalena Peninsula. Considering the large number of boulders displaced during the impact of the storm generated by Medicane Zorbas, we performed a UAV survey three days after the storm. The survey was performed with a Multicopter NT4 Airvision (Studio Geologi Associati T.S.T., Catania, Italy) (Figure 3), equipped with a high-definition camera (resolution 24 Mpx, 16 mm lens, f/3.6), flown at 30 m altitude during four distinct flights at a speed of 1.5 m/s. In order to obtain accurate georeferencing of the UAV data, we used the ground control point (GCP) net installed in 2015, composed of 40 benchmarks regularly spaced across the quarry area (Figure 4A). Benchmark positions were surveyed with real-time kinematic (RTK) GPS, with 1 h of acquisition for each point of the net. Colored markers, visible from 30 m altitude, were placed on the benchmarks (Figure 4B) before the flights. A total of 152 pictures were collected and processed using Agisoft Photoscan Professional software version 1.4.0 (St. Petersburg, Russia) to obtain a high-resolution digital elevation model (DEM, 2 cm grid cell; Figure 5A) and orthophotographs (1 cm/pxl; Figure 5B). Surveys were performed every year and interpreted in the GIS environment (QGIS) through the digitalization of all the main morphological features, such as boulders, fractures, detached parts of the coastline, scour marks, and sediment accumulations. These features were compared with those extracted from the orthophotographs and DEM obtained in the previous year to identify boulder movements or other changes in the coastal landscape. As can be observed in Figure 6, several boulders were displaced in the study area by the impact of Medicane Zorbas; some of these were already in the area and moved from their original position, while others were detached from the infralittoral zone and deposited on the coastline.
Boulders Survey
Once identified in the most recent orthophoto, the new boulders dispersed on the coastline by the impact of Medicane Zorbas were surveyed with photogrammetric techniques to accurately determine volume, dimensions, and organic encrustations. Pictures were collected with a high-definition camera (resolution 24 Mpx, 16 mm lens, f/3.5-5.6). Thirty-eight benchmarks were positioned around each boulder and surveyed with a total station for georeferencing the reconstructions of the blocks. Pictures and benchmarks were processed in Agisoft Photoscan Professional software version 1.4.0 (Figure 7A) to obtain a high-resolution 3D surface of each boulder and then post-processed in Rhino 6 (Rhino Software 6, McNeel Europe, Barcelona, Spain) to close holes and missing surfaces and generate the final 3D model (Figure 7B). Determination of bulk density was carried out on rock samples collected from the surveyed boulders. The volume of each rock sample was determined through the instantaneous water immersion method [75]. The mass of each boulder was estimated as the product of the boulder's density and volume. Density was assigned to each boulder on the basis of the bulk density measurements as well as its thickness and lithology (Table 1).
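The mass estimate itself is a direct product of the photogrammetric volume and the assigned bulk density. A trivial sketch with hypothetical values (chosen only to be consistent with the volume and mass ranges reported in the Results, not taken from Table 1):

```python
# Hypothetical boulders: photogrammetric volume (m^3) and assigned bulk density (kg/m^3)
boulders = {"B2": (0.37, 2060.0), "B3": (0.55, 2100.0), "BN": (0.83, 2035.0)}

for name, (volume, density) in boulders.items():
    mass = density * volume  # mass = density x volume
    print(f"{name}: volume {volume:.2f} m^3 -> mass {mass:.0f} kg")
```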
Video Editing
More than 7 h of consecutive video recorded on 28 September 2018 were analyzed, detecting 28 distinct boulder movements that occurred in the Greek quarry (Table 2). In particular, we focused on five boulders for which a reliable identification in the field and in the orthophoto was possible. Boulders B2, B3, B4, and BN were not present on the coastline before the impact of Zorbas. Boulder K is a very large block, about 41 tons in weight, first displaced by a storm in 2009 and moved again by the two medicanes (2014 and 2018). The videos highlighted the wave impacts on the Greek quarry and showed that the boulder movements occurred in a subaerial/submerged scenario. The video frames capture the moments of boulder movement, which occurred on a clearly wet surface produced by a continuous impacting wave flow. At the moment of boulder movement, the wave flow assumed a turbulent motion with main directions from SE and E, with subsequent backwash flow in different directions. Video analysis was performed by means of the Tracker software (Doug Brown, Cabrillo College, California, U.S.), using as metric reference the measures of the quarry borders detected by TLS data and other specific features recognizable on the bedrock of the quarry (Figure 8). Metric references were selected very close to the boulders displaced in the video in order to avoid issues related to perspective distortion, which could induce significant errors in the measurement of flow velocity. Videos were acquired at 25 fps; by tracking a fixed point on a boulder in each frame, it was possible to assess the wave flow velocity, wave period, wave height, and the velocity and acceleration of the boulders when they started to move. For boulders B2, B3, B4, and BN, we selected the movement showing the highest flow velocity, as reported in Table 3. Boulder K moved just once but, unfortunately, the strong turbulence at that time did not permit us to evaluate the flow in each frame of the video. In this case, velocity was calculated considering the distance and time between two clearly visible frames.
Hydrodynamic Models
In the last few years, several hydrodynamic models have been proposed by different authors to describe boulder displacement in coastal areas as a result of the propagation of extreme waves [28,29,31,32,36,76]. Nandasena et al. [32], in particular, elaborated a model to calculate the flow velocity needed to start the movement of a boulder for various dynamics of movement (sliding, rolling/overturning, saltation/lifting) and different pre-setting conditions of the boulder (joint-bounded, JB; subaerial/submerged, SB). Up to now, the main limit in applying the Nandasena et al. [32] model to field evidence has been that the dynamics of the movements were not known with certainty but were reconstructed by comparing the final position of a boulder with its previous position.
As shown by the video editing, the final position of the boulders displaced during Medicane Zorbas is never related to a single wave impact but is the result of several different movements that occurred in different directions and with different amplitudes. Here we overcome this problem because, through the recorded video, it was possible to extract, for each movement, reliable information about the original position and pre-setting condition of the boulder and the dynamics of the movement. Moreover, from the video editing we obtained values of flow velocity at the time of movement of the boulders that could be directly compared with values estimated by the Nandasena et al. [32] model. This model was applied to the five considered boulders in relation to five specific movements recorded in the video (Table 3): in the subaerial/submerged scenario, sliding (Equation (1)), rolling/overturning (Equation (2)), and saltation/lifting (Equation (3)); and in the joint-bounded scenario, saltation/lifting (Equation (4)). Here u is the flow velocity needed to start the boulder movement; ρs is the density of the boulder; ρw is the density of the water (equal to 1025 kg/m³); g is the gravity acceleration; μs is the coefficient of static friction; θ is the angle of the bed slope at the pre-transport location in degrees; Cd is the drag coefficient (equal to 1.5); a, b, c are the axes of the boulder, with the convention a > b > c; and Cl is the lift coefficient (equal to 0.178). Although some authors attributed to the wave flow the same density value as seawater [28,29,31,32], recent work [77] has demonstrated that the flow density, resulting from a mix of seawater and sediments, should be estimated as follows: ρm = (fs · ρs) + (fw · ρw) (5), where ρm is the average density of the seawater and sediment mix, fw is the seawater fraction by volume between 0 and 1, fs is the sediment fraction by volume between 0 and 1, ρw is the density of clear seawater (1.025 g/cm³), ρs is the sediment density, and fw + fs = 1.
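A sketch of the threshold-velocity calculation is given below. The expressions follow the forms of the Nandasena et al. [32] equations as they are commonly reproduced in the boulder-transport literature, with the mixed flow density of Equation (5) substituted for clear seawater; they are an assumption-laden illustration that should be checked against the original paper before use, and the boulder geometry and friction coefficient are hypothetical.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def mixed_density(f_s, rho_sed, rho_w=1025.0):
    """Equation (5): density of the seawater-sediment mix (f_s + f_w = 1)."""
    return f_s * rho_sed + (1.0 - f_s) * rho_w

def threshold_u(mode, rho_s, rho_w, b, c, theta_deg, mu_s=0.7, Cd=1.5, Cl=0.178):
    """Minimum flow velocity (m/s) to initiate motion, per movement mode.

    Forms follow the widely reproduced Nandasena et al. (2011) expressions;
    mu_s is an assumed static friction coefficient (not given in the text).
    """
    th = math.radians(theta_deg)
    common = 2.0 * (rho_s / rho_w - 1.0) * G * c
    if mode == "sliding":
        u2 = common * (mu_s * math.cos(th) + math.sin(th)) / (Cd * c / b + mu_s * Cl)
    elif mode == "rolling":
        u2 = common * (math.cos(th) + (c / b) * math.sin(th)) / (Cd * c**2 / b**2 + Cl)
    elif mode == "lifting":
        u2 = common * math.cos(th) / Cl
    else:
        raise ValueError(f"unknown mode: {mode}")
    return math.sqrt(u2)

# Hypothetical boulder: axes b = 0.9 m, c = 0.6 m, density 2100 kg/m^3, bed slope 0.58 deg
rho_m = mixed_density(f_s=0.2, rho_sed=2100.0)  # 20% sediment / 80% seawater
for mode in ("sliding", "rolling", "lifting"):
    u = threshold_u(mode, rho_s=2100.0, rho_w=rho_m, b=0.9, c=0.6, theta_deg=0.58)
    print(f"{mode}: u >= {u:.2f} m/s")
```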
Results
Since 2009, the coast of the Maddalena Peninsula has been impacted by several storms, three of which induced boulder displacements inside the Greek quarry; two of these, in particular, were medicane events (Qendresa, 2014; Zorbas, 2018). On 13 January 2009, an intense storm displaced the biggest boulder (K, about 41 tons in weight) about 9 m inland; this boulder was also moved during Medicanes Qendresa and Zorbas (Figure 9). Analyses of orthophotos provided by Regione Sicilia (2007, 2008) revealed that boulder K broke from the coastline between August 2007 and August 2008. Field survey suggested that this occurred as a result of erosional processes (Figure 9B) and not in response to the direct impact of a wave. Although the 2009 storm induced the biggest displacement of boulder K compared with its movements during the medicanes (Figure 9C), it did not dislocate many other boulders (only four more small boulders were moved during the storm). In contrast, Qendresa and Zorbas displaced a larger number of small boulders but were able to move boulder K only a short distance (about 1 m for each event). The two medicanes transported inland dozens of small blocks, ranging in weight between 1 and 2 tons, eroded from the external edge of the coastline (Figure 10) and dispersed on the coastal area up to the external walls of the quarry, about 33 m landward. The most relevant boulder deposit was detected in the NW corner of the quarry; it is mainly composed of imbricated boulders lying on coarse sand about 40 cm thick. Analyses of the video showed that this berm was the result of boulder accumulation by the action of wave impacts, but mostly by a turbulent flow, running along the base of the quarry, that moved the boulders along a path parallel to the coastline. The video highlighted that this flow, reaching velocities of up to 4 m/s, was nourished by the strongest waves hitting the wall of the quarry and steered by the topography gently sloping toward the north (about 0.58°). Flow velocities estimated from the videos were compared with the flow needed to move the boulders assessed with the Nandasena et al. [32] relationships (Figure 11). In particular, focus was given to boulders BN, B2, B3, B4, and K (Figure 12) in the submerged scenario, selecting the five most clearly visible movements (one video for each boulder; see Table 3 and Supplementary Materials Video S1: Boulder B2; Video S2: Boulder B3; Video S3: Boulder B4; Video S4: Boulder BN; Video S5: Boulder K) in order to estimate the wave flow in combination with the type of boulder movement. These boulders present volumes spanning from 0.37 m³ to 0.83 m³, with masses estimated between 763 kg and 1689 kg. For each of them, sliding, rolling/overturning, and saltation/lifting movements, in combination with fragmentation, were detected in the video records. For boulder K, about 41 tons in weight, a movement with a displacement of 1 m between 16:31 and 16:32 UTC was detected (Figure 13). Although the great turbulence at the time of displacement did not permit us to calculate flow velocity in each single frame of the video, we estimated the flow considering the time the wave took to travel the distance between the edge of the quarry and boulder K.
Discussion
Medicane Zorbas caused a very strong storm, characterized by high values of surge and wave height, able to dissipate a large amount of flow energy on the coastal zone. The energy dissipated on the Maddalena Peninsula was sufficient to move coastal boulders of considerable weight, up to 41 tons. Until now, studies of boulder movements have been carried out by defining pre- and post-impact scenarios after the event (e.g., references [19-22,30,75]); in this work, for the first time, it was possible not only to directly observe how the boulder movements occur, but also to accurately measure important parameters, such as the flow velocity at the time of boulder displacement, useful for comparing hydrodynamic models with the natural process. From a general analysis of the videos, although all the displaced boulders were already present on the coastline, 90% of the movements occurred only when the boulders were completely submerged by water flows. A single wave impact was rarely responsible for boulder displacement (three cases out of 28), highlighting that it is not correct to link boulder movements to a single wave height. According to Cox et al. [34], it appears more reasonable that boulder displacements along the coastline occur due to the effects of multiple waves. In our videos, it is evident that multiple waves set up a continuous and turbulent water flow on the coastal area, causing flooding able to submerge the boulders, which then start to move.
During the storm on the Maddalena Peninsula, the flooding generated by multiple wave impacts was amplified by a temporary storm surge induced by the Medicane. The towns of Marzamemi and Santa Croce Camerina, located about 40 km south of the Maddalena Peninsula, registered severe flooding inside their harbors, with sea level increased by up to 1 m. In addition, on 28 September 2018, the tide gauge located inside the harbor of Catania recorded anomalous sea levels, up to 15 cm above normal (Figure 2C). This effect was also estimated for the Maddalena Peninsula through a comparison between the pre-impact immersive scenario, reconstructed from TLS and UAV photogrammetric data, and images extracted from the videos. This analysis allowed us to reconstruct an increase of the water column on the Maddalena Peninsula of about 50 cm (Figure 14). In any case, the main contribution to the flooding that submerged the boulders in the Greek quarry on the Maddalena Peninsula is related to the impact of multiple waves, up to nine associated with the displacement of the biggest boulders, which sometimes reached the top of the quarry's wall (4 m high). Furthermore, for boulder K, the biggest displaced by the storm, the movement occurred only when a train of nine waves (wave heights of 0.22-1.3 m in the video records) generated a strong flow impacting the shore. For boulder K, the Engel and May [36] relationship gives a wave height at the breaking point needed to start the movement equal to 11.56 m, much greater than the values recorded both by satellite data offshore (Hs of about 4.1 m) (source AVISO satellite altimetry, credits CLS/CNES [71]) and by video analysis on the shore platform.
In the videos, we detected 28 distinct boulder movements; for 10 of these, it was not possible to calculate flow velocity because, at the time of their occurrence, turbulence was too intense to track reference points in the video images. Moreover, boulders B1, B1_F, and BT were not recognized in the field, so we could not apply the Nandasena et al. [32] model to them. For these reasons, we focused our analyses on four small boulders (B2, B3, B4, BN), whose movements occurred through sliding, rolling/overturning, and saltation/lifting dynamics, and one big boulder of 41 tons (boulder K) displaced by a turbulent wave flow. Knowledge of the boulder features, obtained by TLS (K) and photogrammetric (B2, B3, B4, BN) surveys, permitted us to assess the flow needed to displace the boulders for each of the three types of movement by means of the hydrodynamic equations of Nandasena et al. [32], while the video records allowed us to determine the wave flow velocity at the moment the movement started using the Tracker software. Flow velocity assessed through Tracker is subject to a relative error dependent on perspective distortion and frame sampling. This error ranges between 15% for clearly visible flows and 50% for turbulent flows that do not permit the identification of reference points in the video frames. Although a flow meter located inside the quarry could contribute to a better evaluation of flow velocity, the absence of solid points to anchor the instrument did not permit us to use this technique. For this reason, video editing was the most suitable tool for estimating the wave flow velocity in this case study.
We selected for each boulder the movement with the highest flow velocity measured in the video and compared these values with those calculated with the Nandasena et al. [32] model. The comparison shows that the model provides flow values greater than those measured in the video at the start of the movements. The discrepancy is probably related to the fact that the Nandasena et al. [32] model considers the flow velocity required to move a boulder to be equal in the subaerial and submerged scenarios, while the evidence from the video analyses suggests that it is easier for multiple waves to move a submerged boulder. This could be explained by considering that a wave flow able to move a boulder has a density different from that of normal seawater, because it consists of a mix of seawater and sediments. In the literature, the standard density value of seawater is attributed to the wave flow, while the recent work of Terry and Malik [77] has demonstrated that the correct density value is that of the seawater-sediment mix.
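Following this reasoning, the lift coefficient consistent with an observed flow velocity can be back-calculated by inverting the saltation/lifting threshold with the mixed-density correction applied. A sketch under the same hedged assumptions as above (the input values are illustrative, not the paper's exact computation):

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def lift_coefficient(u_obs, rho_s, rho_m, c, theta_deg):
    """Invert the lifting threshold u^2 = 2 (rho_s/rho_m - 1) g c cos(theta) / Cl for Cl."""
    th = math.radians(theta_deg)
    return 2.0 * (rho_s / rho_m - 1.0) * G * c * math.cos(th) / u_obs**2

# Hypothetical: observed flow of 2.4 m/s moving a boulder with c = 0.6 m on a 0.58-degree slope
rho_m = 0.2 * 2100.0 + 0.8 * 1025.0  # 20% sediment / 80% seawater mix
Cl = lift_coefficient(u_obs=2.4, rho_s=2100.0, rho_m=rho_m, c=0.6, theta_deg=0.58)
print(f"back-calculated Cl = {Cl:.2f}")  # order 1, versus the literature value of 0.178
```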
We made different evaluations of the flow velocity based on the Nandasena et al. [32] model, considering different ρm values (Table 4) as a function of the sand fraction. Although the assessment of the model flow considering a 20% sand fraction with an 80% seawater fraction shows a better fit between calculated and observed flow velocities, an important disagreement remains (Figure 15). Another parameter that probably has to be redefined is the lift coefficient. The friction forces encountered by the wave flow are conditioned by this coefficient, which in the literature is equal to 0.178. Recent works by Rovere et al. [37] and Cox et al. [34] show that the value of the lift coefficient used in the literature is underestimated. We used the Nandasena et al. [32] equations to estimate the value of the lift coefficient necessary to fill the gap between the calculated flow velocities, inclusive of the density correction, and those observed in the video. In the specific case of the Maddalena Peninsula, we estimated a corrected lift coefficient equal to 1.4. A further consideration concerns Boulder A (Figure 9A,B), located in the southern area of the quarry. This boulder, about 19 tons in weight, was interpreted in Scicchitano et al. [16,54] as a tsunami deposit, and it has never been displaced during the monitoring period. In the same period, boulder K, which is double its weight, was displaced at least two times (Figure 9). Analyses of the videos, together with comparison of the 3D models of the boulders, suggested that the axis orientation and the geometry of the contact surface between boulder and bedrock play an important role in boulder displacement. Boulder K exposes its longer axis to the main wave impact direction (from the east), while Boulder A faces the same direction with its shorter axis. Moreover, the face of Boulder A directly in contact with the surface of the quarry is flat and very regular (it was probably a part of the quarry detached and transported inland by an extreme event), resulting in a high value of friction. In contrast, the surface of boulder K is heavily karstified and exposes several erosional pools up to 1 m in diameter and 40 cm deep. These characteristics allow the water flow induced by the impact of multiple waves to generate turbulence below boulder K, an effect minimized for Boulder A. These considerations support the hypothesis that a tsunami could reasonably be responsible for the displacement of Boulder A [15,16]. Last but not least, analyses of the three storms that impacted the Maddalena Peninsula in 2009, 2014, and 2018 suggest that, although the medicanes generated wave heights compatible with the annual storm regime of the area (Figure 2A), their impacts produced major effects. Qendresa and Zorbas displaced dozens of blocks up to 2 tons in weight in the quarry of the Maddalena Peninsula; most of these were eroded from the coastline and transported landward. The 2009 storm dislocated only one large boulder which, although the biggest located inside the quarry, was at that time lying very close to the sea in a very unstable position (Figure 9). The major effects produced by the medicanes could be explained by the storm surges (up to 1 m in the case of Zorbas) that they induced on the coastal area, as registered by the tide gauge of Catania and observed in the video recorded on the Maddalena Peninsula.
As a consequence, coastal areas are probably more vulnerable to the impact of medicanes than to common storms, and this appears even more significant when taking into account the theories claiming that, in the future, medicanes will increase in strength in response to climate change [12,13,78].
Conclusions
The dynamics of boulder displacement in response to the impact of a storm event have always been difficult to study due to the lack of evidence about the modality of the movements. Although pre- and post-event surveys allow analysis of the extent of boulder displacements, video records represent the best solution for a deeper comprehension of the dynamics of this natural process. In this study, we present the first known video records of boulder displacements, which occurred inside an ancient coastal quarry located on the Maddalena Peninsula (Siracusa). Using different survey and remote sensing techniques, we reconstructed pre- and post-event immersive virtual scenarios that were used to geometrically analyze the video and to calculate flow velocity and wave heights at the time of boulder displacements. General observations, together with the comparison between data measured in the video and calculated with the hydrodynamic model, suggest the following conclusions: (1) Boulder displacements occur mainly in the submerged scenario, as a consequence of flooding generated by the impact of a series of waves. Although some hydrodynamic models equate the flow velocity necessary to move a boulder in the subaerial and submerged scenarios, our evidence suggests that the two should probably be described with different specific equations. (2) Movements occur under the impact of multiple small waves rather than a single large one; in any case, the possibility that a single large wave could displace a boulder is not excluded at all, but it is probably more attributable to tsunami events. Multiple waves generate a continuous flow that nullifies friction forces, triggering the boulder displacements. Modelling through the Engel and May [36] approach results in wave height values (11.56 m) much higher than those recorded by satellite data offshore (Hs of about 4.1 m) and by video analysis on the shore platform (0.22-1.3 m). This confirms that single-impact models provide values of wave height that are strongly overestimated with respect to the natural process. (3) Hydrodynamic models should adopt a flow density corrected for the seawater-sediment mix according to Terry and Malik [77] and a different value of the lift coefficient according to Rovere et al. [37] and Cox et al. [34]. (4) Considerations on a big boulder not displaced by the impact of the storm in 2009 or by the impact of the two medicanes, Qendresa (2014) and Zorbas (2018), suggest that a tsunami could reasonably be responsible for the deposition of such boulders [15,16]. This seems to confirm that, although some authors consider the number of tsunami events reconstructed for the Mediterranean Sea to be overestimated [25], it is possible to define field evidence and methodological analyses able to discern tsunami and storm events [26].
Analyses of the monitored data suggest that medicanes produce greater effects than common storms; this is probably due to the flooding they induce on coastal areas and represents an important aspect to investigate in order to properly assess the vulnerability of the coasts.
"Environmental Science",
"Geology"
] |
Conformational Changes in a Pore-lining Helix Coupled to Cystic Fibrosis Transmembrane Conductance Regulator Channel Gating*
Cystic fibrosis transmembrane conductance regulator (CFTR), the protein dysfunctional in cystic fibrosis, is unique among ATP-binding cassette transporters in that it functions as an ion channel. In CFTR, ATP binding opens the channel, and its subsequent hydrolysis causes channel closure. We studied the conformational changes in the pore-lining sixth transmembrane segment upon ATP binding by measuring state-dependent changes in the accessibility of substituted cysteines to methanethiosulfonate reagents. Modification rates of three residues (residues 331, 333, and 335) near the extracellular side were 10-1000-fold slower in the open state than in the closed state. Introduction of a charged residue by chemical modification at two of these positions (residues 331 and 333) affected CFTR single-channel gating. In contrast, modifications of pore-lining residues 334 and 338 were not state-dependent. Our results suggest that ATP binding induces a modest conformational change in the sixth transmembrane segment, and that this conformational change is coupled to the gating mechanism that regulates ion conduction. These results may establish a structural basis of gating involving the dynamic rearrangement of transmembrane domains necessary for vectorial transport of substrates in ATP-binding cassette transporters.
ATP-binding cassette (ABC) transporters are a large family of integral membrane proteins that actively transport a broad range of substrates across cell membranes. Despite their diverse functions, they share a common basic architecture comprising two transmembrane domains (TMDs) that form a pathway for the permeation of substrates and two cytoplasmic nucleotide-binding domains (NBDs). The highly conserved NBDs are the molecular motors that transform the chemical potential energy of ATP into conformational changes that drive substrate molecules through the TMDs (1). Recent biochemical, structural, and genetic studies have led to a common mechanism in which ATP binding and hydrolysis induce the formation and dissociation of an NBD dimer, respectively. This regulated switch induces conformational changes in the TMDs to mediate vectorial transport of substrates across cell membranes (2-4). However, the structural bases for the propagation of conformational changes from the NBDs to the TMDs, and the conformational changes in the TMDs themselves, are not well understood.
Cystic fibrosis transmembrane conductance regulator (CFTR; ATP-binding cassette transporter subfamily C member 7), the product of the cystic fibrosis gene, is unique among ABC transporters in that its TMDs provide a conductive channel for anions. Phosphorylation of serines in the regulatory domain by cAMP-dependent protein kinase activates CFTR (see Fig. 1A). Once the channel is phosphorylated, ATP-induced dimerization of the NBDs opens the channel, and their subsequent dissociation upon ATP hydrolysis closes it (5). Despite extensive biochemical, structural, and functional studies, the nature of the conformational changes in the TMDs associated with CFTR channel gating remains elusive.
The structure of the pore of CFTR is poorly understood. It is not known how many of the predicted 12 transmembrane segments contribute to formation of the pore. Nevertheless, several studies have suggested that the sixth transmembrane segment (TM6) in TMD1 plays a key role in the pore structure and in determining its functional properties (6-8). The positively charged residue Arg-334, in the putative outer mouth of the pore, facilitates the entry of Cl- ions into the pore (9,10), and the side chain of Thr-338, located one helical turn away, lies in the pore (11,12). We have investigated the structure of TM6 and probed its conformational changes during channel gating using the substituted cysteine accessibility method (13). Each residue in and flanking TM6 (amino acids 325-353) was replaced individually with cysteine, and the rates of modification by water-soluble thiol-reactive reagents were assessed in different channel gating states. State-dependent differences in the effects of Cd2+ and the reactivities of sulfhydryl reagents were interpreted as reflecting changes in the local environment and accessibility of the substituted cysteines to the aqueous phase. During channel opening, there is a structural rearrangement in TM6 that results in decreased accessibility of three residues near the extracellular end to the aqueous phase. This conformational change is local, because it does not affect nearby pore-lining residues, and furthermore, it is required for the channel to open. These results establish a structural basis for CFTR gating involving ATP-induced conformational changes in TMDs, which may be relevant for other members of the ABC transporter family.
EXPERIMENTAL PROCEDURES
Molecular Biology-Mutations were constructed in the pSP-CFTR (14) plasmid containing the cDNA of human CFTR, using the QuikChange site-directed mutagenesis kit from Stratagene (La Jolla, CA). Each mutation was confirmed by sequencing. Each cDNA was linearized and transcribed using an SP6 promoter-based in vitro transcription method (Ambion, Austin, TX). For channel expression, Xenopus oocytes were injected with cRNA, stored at 18°C, and used for recordings 2-5 days after injection. All of the chemicals were purchased from Sigma-Aldrich unless otherwise stated.
Electrophysiology-Conventional two-electrode voltage-clamp methods were used to measure membrane currents in CFTR-expressing oocytes, using an OC-725C oocyte clamp amplifier (Warner Instruments) connected to a computer via an ITC-18 interface (Instrutech Corp., Elmont, NY). Single oocytes were placed in a chamber (25 μl volume) containing LCa96 (96 mM NaCl, 1 mM KCl, 0.2 mM CaCl2, 5.8 mM MgCl2, 10 mM HEPES, pH 7.5 with NaOH) and continuously perfused at a rate of 1.5 ml/min. Pulse software (HEKA Electronics, Inc.) was used to ramp the applied transmembrane potential (Vm) at regular intervals. Vm was clamped at -20 mV between the voltage ramps. Transmembrane current (Im) and Vm were digitized at 50 Hz during the voltage ramps and written directly to hard disk. In data analysis with Igor Pro (Wavemetrics, Lake Oswego, OR) software, a fifth-order polynomial was fitted to the raw, monotonically increasing Im-Vm data from each voltage ramp. The whole-oocyte membrane conductance and the reversal potential (Vrev) were evaluated simultaneously as the slope (dIm/dVm) and the x-intercept of the polynomial, respectively. CFTR channels were stimulated by external application of forskolin (10 μM) and IBMX (1 mM or 20 μM) in LCa96. Vm was ramped from -60 to +20 mV in 5 s and repeated at 10-s intervals, and the change in conductance was evaluated at -20 mV. In experiments designed to measure kinetics, Vm was ramped from -40 to 0 mV in 950 ms and repeated at 1-, 2-, or 5-s intervals depending on the rate of modification.
For patch-clamp experiments, the oocyte vitelline membranes were removed manually, and the oocytes were transferred to a recording chamber containing standard bath solution. The pipette and bath solutions contained 138 mM N-methyl-D-glucamine, 2 mM MgCl2, 5 mM HEPES, 136 mM HCl, pH 7.4 with HCl. Seals of 20-30 GΩ were obtained by gentle suction. Single CFTR channel currents were recorded in the cell-attached configuration, at a pipette potential of ±100 mV, via an Axopatch 200B amplifier, filtered at 100 Hz, digitized online at 500 Hz using an ITC-18 board, and recorded to disk by Pulse software. CFTR channels were activated by forskolin (10 μM) and IBMX (1 mM). The experiments were performed at room temperature. Current records of at least 5-min duration were idealized using half-amplitude threshold crossing for each experimental condition with TAC software (Bruxton, Seattle, WA) for Po evaluation. The total number of channels in a patch was assumed to be the maximum number of open-channel current levels observed over the full duration of the experiment (5-15 min). The data were fitted and modeled using Igor Pro (WaveMetrics, Lake Oswego, OR).
For kinetic analysis, the current records were filtered digitally at 50 Hz and idealized using half-amplitude threshold crossing, with imposition of a fixed dead time of 6.5 ms. Event lists were fitted with a simple model in which all principal gating transitions were pooled into a closed-open scheme, and flickery closures were modeled as pore blockage events, resulting in the three-state closed-open-blocked (C-O-B) scheme. Rate constants rCO, rOC, rOB, and rBO were extracted by a simultaneous fit to the dwell time histograms of all conductance levels, as described (15). The mean interburst and burst durations were then calculated as τib = 1/rCO and τb = (1/rOC)(1 + rOB/rBO), respectively. The data are presented as the means ± S.E. Student's unpaired (two-tailed) t test was used to determine significance (p < 0.05).
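As a concrete illustration of this bookkeeping, the snippet below converts fitted C-O-B rate constants into the mean interburst and burst durations defined above; the rate values are hypothetical, not fitted data from this study.

```python
def burst_durations(rCO, rOC, rOB, rBO):
    """Mean interburst and burst durations (s) for the closed-open-blocked scheme."""
    tau_ib = 1.0 / rCO                       # mean interburst (closed) duration
    tau_b = (1.0 / rOC) * (1.0 + rOB / rBO)  # mean burst duration, flickery closures included
    return tau_ib, tau_b

# Hypothetical fitted rate constants (s^-1)
tau_ib, tau_b = burst_durations(rCO=1.2, rOC=2.5, rOB=8.0, rBO=40.0)
print(f"interburst = {tau_ib * 1000:.0f} ms, burst = {tau_b * 1000:.0f} ms")
```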
Cysteine Modification-Stock solutions of 100 mM MTS reagents (Toronto Research Chemicals, North York, Canada) were made, and aliquots were frozen at -80°C. For every experiment, single aliquots were thawed, diluted in LCa96, or in LCa96 containing forskolin and IBMX, to the indicated concentration, and used immediately. The effects of externally applied MTSEA, MTSET, MTSES, and Cd2+ on whole-oocyte conductance were measured, and the percentage change in conductance was calculated for each oocyte. MTS reagents were applied for 3-8 min until modification reached a steady state. The whole-cell conductance was plotted as a function of cumulative exposure time and fitted by a single exponential function. The MTS concentration was raised to 1 mM to measure the slowest rates and lowered to 100 μM or 10 μM to measure the fastest rates. The lowest MTS concentration sufficient to produce a measurable change in conductance (within 3 min) was used to calculate the apparent second-order reaction rate constant (k = 1/([MTS]·τ)).
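The rate analysis amounts to fitting a single exponential to conductance versus cumulative exposure time and normalizing the time constant by reagent concentration. A minimal sketch on synthetic data (all values hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, g_inf, dg, tau):
    """Single-exponential relaxation of whole-cell conductance during modification."""
    return g_inf + dg * np.exp(-t / tau)

# Synthetic conductance time course (arbitrary units) during exposure to 100 uM MTS reagent
t = np.linspace(0, 180, 19)  # cumulative exposure time (s)
g = mono_exp(t, 12.0, 25.0, 45.0) + np.random.normal(0, 0.3, t.size)

(g_inf, dg, tau), _ = curve_fit(mono_exp, t, g, p0=(g[-1], g[0] - g[-1], 60.0))
mts_conc = 100e-6           # reagent concentration (mol/L)
k = 1.0 / (mts_conc * tau)  # apparent second-order rate constant
print(f"tau = {tau:.0f} s, k = {k:.2e} M^-1 s^-1")
```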
Statistical Methods-One-way analysis of variance with a post hoc Dunnett's test (GraphPad Prism) was used to assess the statistical significance of any change in cAMP-activated conductance following exposure to Cd²⁺ and MTS reagents (see Fig. 1C). A p value <0.01 was the threshold for a statistically significant effect. The data are presented as the means ± S.D. unless specified otherwise. Student's unpaired (two-tailed) t test was used to assess statistical significance for all other experiments.
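For illustration, this ANOVA-with-Dunnett design can be sketched with scipy.stats.dunnett (available in SciPy ≥ 1.11); all sample values here are invented:

```python
import numpy as np
from scipy import stats   # SciPy >= 1.11 provides stats.dunnett

rng = np.random.default_rng(0)
wt = rng.normal(0.0, 5.0, 8)          # % change in conductance, WT control
r334c = rng.normal(-80.0, 5.0, 8)     # strongly inhibited mutant (hypothetical)
k329c = rng.normal(10.0, 5.0, 8)      # mildly potentiated mutant (hypothetical)

# Compare each mutant group against the WT control group
res = stats.dunnett(r334c, k329c, control=wt)
for name, p in zip(["R334C", "K329C"], res.pvalue):
    print(f"{name}: p = {p:.4g} -> {'significant' if p < 0.01 else 'n.s.'}")
```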
RESULTS
Identification of Substituted Cysteines Accessible to Cd²⁺ and MTSEA-In this study, each of 29 consecutive residues between amino acids 325 and 353, comprising extracellular loop 3 and TM6 (Fig. 1A), was substituted one at a time with cysteine (except for native Cys343), and the mutant channels were expressed in Xenopus oocytes. Twenty-six of the 28 mutants produced CFTR Cl⁻ currents following activation with a mixture of forskolin (10 μM) and IBMX (1 mM). We examined the effects of two thiol-reactive agents on channel function: Cd²⁺, a cation capable of binding to SH groups, and MTSEA, a partially ionized primary amine. We assumed that Cd²⁺ only binds to those cysteines with side chains at a water-accessible surface, whereas MTSEA might also modify residues partially buried in the membrane (16). Applications of dithiothreitol, Cd²⁺, or MTSEA (all reagents at 1 mM) were without significant effect on the whole cell Cl⁻ conductance of oocytes expressing wild type CFTR (Fig. 1B). The positively charged residue Arg334 influences Cl⁻ permeation properties and is therefore expected to be near the aqueous pore (9, 10). Application of Cd²⁺ to activated R334C CFTR reduced whole cell Cl⁻ conductance by >80%, with the inhibition completely reversible upon Cd²⁺ removal (Fig. 1C). A brief application of dithiothreitol accelerated the reversal associated with the removal of Cd²⁺. Subsequent application of MTSEA sharply and irreversibly increased the Cl⁻ conductance, confirming previous studies (10). Subsequent application of Cd²⁺ was without further effect, suggesting that both MTSEA and Cd²⁺ reacted with the same cysteine, viz. Cys334, and its complete modification by MTSEA abolished the binding of Cd²⁺. Representative experiments on other Cys-substituted CFTR mutants can be found in supplemental Fig. S1. Fig. 1D summarizes the effects of external Cd²⁺ and MTSEA on the whole cell conductances of various cysteine-substituted CFTR mutants. Both Cd²⁺ and MTSEA had significant effects on the conductances of only five (I331C, L333C, R334C, K335C, and T338C) of the 26 Cys-substituted channels examined. Cd²⁺ had a small but significant potentiating effect on K329C channels, but MTSEA, which by itself is without any significant effect, was able to abolish the potentiating effect of Cd²⁺ (supplemental Fig. S2). Surprisingly, MTSEA did not have a functional effect on any residues that were not identified by Cd²⁺ reactivity, even though it can permeate the membrane to reach residues on the other side of the ion permeation gate, as evidenced by its modification of K95C, a residue that is on the cytoplasmic side of the channel (17, 18). These experiments were performed on activated channels that undergo rapid transitions between the closed and open states. Thus, the observed reactivities represent a composite of reactivities with closed and open channel conformations. The thiol side chains of these residues are presumably exposed to the aqueous phase during part or all of the gating cycle, and their modification by Cd²⁺ and MTSEA affected either the single-channel conductance or channel gating.
Accessibility of Substituted Cysteines to MTSEA in the Inactive Closed State-To determine whether there was a state dependence to the observed reactivities, we examined the reactivity of these five residues (residues 331, 333, 334, 335, and 338) to MTSEA applied to the external side of the channel in the closed state. Before activation of CFTR by the cAMP mixture, the conductance of CFTR-expressing oocytes was very low and comparable with control water-injected oocytes.
[Figure 1D: conductance changes for mutants Y325C through T351C; an asterisk indicates mutants for which the change was significantly different from WT for both Cd²⁺ (green bars) and MTSEA (red bars) by one-way analysis of variance (p < 0.01).]
The oocytes
were exposed to MTSEA (1 mM) for 3 min before cAMP stimulation, when the channels were in the inactive closed state.
The MTSEA was then washed away, and the channels were subsequently activated by perfusion of the cAMP mixture. The effect of Cd²⁺ on whole cell Cl⁻ conductance was then assayed. We reasoned that if a cysteine side chain was not accessible to MTSEA in the channel closed state, the activated channel would retain its sensitivity to Cd²⁺, whereas if the cysteine was modified by MTSEA in the closed state, then subsequent application of Cd²⁺ to activated channels would be without effect on whole cell conductance. An example of this protocol applied to R334C-CFTR expressing oocytes is shown in Fig. 2A. The normal inhibitory effect of Cd²⁺ was nearly absent for channels that had been pre-exposed to MTSEA in the closed state. The magnitude of the loss of Cd²⁺ sensitivity was similar to that observed for activated R334C-CFTR that had been modified by MTSEA (e.g. Fig. 1C). The results for all five Cys-substituted channels are summarized in Fig. 2B. Strikingly, for all five Cys-substituted channels that were pre-exposed to MTSEA in the closed state, the normal inhibitory effect of Cd²⁺ was either absent or substantially reduced. These results suggest that the thiol side chains of cysteines in these five positions are accessible to MTSEA in the inactive closed state. Functional Effects of Cd²⁺ and MTSEA on Substituted Cysteines in E1371Q-CFTR-To study the accessibility of substituted cysteines in the open state, the effects of Cd²⁺ and MTSEA were examined for CFTR channels bearing the E1371Q mutation. This Glu to Gln mutation in NBD2 prevents ATP hydrolysis without affecting ATP binding and stabilizes the open state of the channel by almost 1000-fold compared with WT-CFTR (19, 20). The average burst duration of this mutant is 7−8 min (19) (Fig. 3B). MTSEA had a small potentiating effect on K335C in the wild type channel background, whereas it was inhibitory in the "locked open" E1371Q channels (Fig. 3D). In contrast, the pore-lining residues R334C and T338C exhibited very small or no differences in their functional effects in the Glu1371 and Gln1371 channels, respectively. The differences between the Glu1371 and Gln1371 backgrounds in the effects of Cd²⁺ and MTSEA on I331C, L333C, R334C, K335C, and T338C channels are summarized in Fig. 3 (C and E), respectively. The differences in the magnitude of MTS modification suggest that in the E1371Q background there is a significant change in TM6 structure. This structural change, which manifests as a change in the efficacy of MTS modification, affects some residues but has no effect on nearby pore-lining residues.
Effect of E1371Q Mutation on Thiol Modification Rates-To obtain quantitative information about changes in reactivity, the rates of substituted cysteine modification were calculated by fitting the time course of Cl⁻ conductance modification with a single exponential, which was then used to calculate second order reaction rate constants for various MTS reagents. This analysis was carried out in both Glu1371 (WT) and Gln1371 backgrounds (Fig. 4A; summarized in Fig. 4, B and C; MTSET data can be found in supplemental Fig. S4). The cysteine residues R334C and T338C, postulated to be pore-lining residues, showed no changes in their rates of modification by either MTSEA or MTSES. In contrast, I331C, L333C, and K335C reacted faster in the Glu1371 background (Fig. 4, B and C). These results reveal clearly that modification of I331C, L333C, and K335C by both these reagents was much slower in the Gln1371 mutational background than in the WT Glu1371 channels. The modification rates of both positively and negatively charged reagents were reduced, indicating that the slower reactivities were due to steric effects rather than electrostatic ones. The difference in reaction rates between WT and Gln1371 channels was greatest for K335C, which reacted nearly 800 times more slowly in the E1371Q background. L333C showed a relatively smaller 10-fold decrease, and I331C reacted 5-fold slower (Fig. 4, B and C) in E1371Q channels. Because activated WT channels (Glu1371 background) are in the open state for a significant fraction of time (Po ≈ 0.2), the measured reaction rates in the WT channel background represent weighted sums of the open and closed state reaction rates. Because the reactivity is much slower in the channel open state (Gln1371), the measured rates in wild type channels (Glu1371) underestimate the true closed state reaction rates. Thus, the differences in reactivities between the closed and open states are likely to be larger than the measured differences between Glu1371 and Gln1371 channels. Alternatively, it is possible that the observed reaction rates in the E1371Q mutational background are due to a change in the channel structure induced by the mutation and not related to structural changes associated with the open channel state. Modification Rates Depend on CFTR Channel Po-In Xenopus oocytes, the cAMP-mediated activation of CFTR occurs by means of an increase in the open probability of CFTR channels (21). To confirm that the modification rates depend on the channel Po, we examined the effects of MTSEA and MTSES on oocytes under two different activating conditions. In the presence of 0.02 mM IBMX, the whole cell conductance of oocytes is only 20−30% of the conductance elicited by 1 mM IBMX (Fig. 5, A and B). This decreased conductance is due to a proportional decrease in the channel Po, as a smaller fraction of the total channels is active at any given time. Comparisons of the effects of MTSEA and MTSES on Cys-substituted channels under minimal (0.02 mM IBMX) and maximal (1 mM IBMX) activation conditions are summarized in Fig. 5C. All of the residues except L333C had varying but significant differences in the magnitude of the functional effect by MTS reagents. For example, under minimally active conditions, the stimulatory effect of MTSEA on R334C and K335C conductance was greater than under maximally active conditions. MTSES, however, had a smaller inhibitory effect on R334C and K335C when minimally activated.
For I331C, the inhibitory effect of both MTSEA and MTSES was larger under minimally active conditions. The observed differences in the magnitude of the functional effects suggest that the conformation of these residues changes with the open probability of the channel.
The quantitative differences in reactivity under these two stimulatory conditions were determined by measuring the kinetics of their modification as described before. Under minimal activation conditions (0.02 mM IBMX), the cysteine residues R334C, K335C, and T338C showed no significant differences in their modification rates by either MTSEA or MTSES (Fig. 6). However, I331C and L333C channels had a significantly faster modification rate when minimally active. When stimulated by 0.02 mM IBMX, both I331C and L333C reacted nearly 25 times faster with MTSEA and nearly 10−20 times faster with MTSES. These results suggest that when the CFTR channel Po is low, residues I331C and L333C react quite rapidly with MTS reagents, and as the Po increases their reactivity decreases correspondingly.
Evidence for TM6 Movement Associated with Channel Gating-The state-dependent reactivity of the MTS reagents with I331C, L333C, and K335C channels could indicate a change in the water accessibility of these residues caused by a conformational change in TM6 or by an alteration in the local environment surrounding these residues. We reasoned that if movement of TM6 residues from a hydrophilic to a less hydrophilic environment was coupled to channel opening, then introduction of a charged residue might interfere with such movements and therefore affect the channel opening rate. We therefore investigated the effects of MTS modification on CFTR channel gating. We used the bulky, positively charged MTSET to maximize possible functional effects of modification on gating and to limit the possibility that the reagent might cross the membrane. For two of the three mutants, I331C and L333C, modification with MTSET profoundly affected channel gating (Fig. 7A). The open probabilities of MTSET-modified I331C and L333C channels were significantly smaller than those of unmodified channels. However, the single channel conductance was not affected in either channel (data not shown). In contrast, MTSET was without significant effects in both WT and K335C channels on either Po or single channel conductance. These results indicate that MTSET reduces the whole cell conductance of I331C- and L333C-CFTR by decreasing channel open probability rather than single-channel conductance. The deposition of a charge by MTSET modification at position 331 or 333 decreased the channel opening rate, consistent with the possibility that channel opening involves movement of these residues from an aqueous to a hydrophobic environment. The slowing of the channel opening rate could be explained if the positive charge at position 331 or 333 helps stabilize the channel-closed state (aqueous environment) and destabilize the transition state (hydrophobic environment) for the channel opening reaction, leading to an increase in activation energy for channel opening and hence decreasing the opening rate.
DISCUSSION
In this study, we investigated the architecture and rearrangement of the TM6 helix of CFTR. We examined the state-dependent reactivity of each residue with MTS reagents. We also studied how gating and permeation properties of CFTR were affected by site-specific modifications. Finally, we interpret our results in terms of the known structures of ABC transporters.
The substituted cysteine-accessibility method assumes that only cysteine residues at a water-accessible surface of the protein will react with hydrophilic MTS reagents and that the modification produces an irreversible change in channel function. However, it is possible that water-accessible cysteine residues will not react with MTS reagents because of an unfavorable electrostatic environment rather than a lack of accessibility (22). Moreover, modification of cysteine residues that fails to alter channel function will be undetected. We believe that K329C, whose whole cell conductance was stimulated by Cd²⁺, is an example of one such residue that reacts with MTSEA, but the modification is without effect on channel function (supplemental Fig. S2). In our study, we identified a cluster of five residues near the extracellular end of TM6 that were accessible to MTS reagents. Our results have similarities and differences with a previous substituted cysteine accessibility study of CFTR (17). Although both studies identified I331C, L333C, R334C, and K335C as accessible to MTS reagents, we find that MTSEA increased the conductance of R334C- and K335C-expressing oocytes, whereas it was reported in the previous study to decrease channel currents. That study did not observe any effects of MTSEA on T338C channels, whereas we did here. Finally, the MTSEA reactivity was restricted to only five of twenty-six residues in and flanking TM6 in our study, whereas in the earlier study, residues F337C, S341C, I344C, R347C, T351C, R352C, and Q353C were also shown to be accessible to MTS reagents. The reasons for these discrepancies are unclear. However, our observations on the accessibility of R334C, K335C, and T338C and the inaccessibility of R347C are consistent with other studies (10, 11). Furthermore, we observed similar reactivities of these substituted cysteines in a Cys-less CFTR background, suggesting that the introduced cysteine is the site of modification (supplemental Fig. S5).
State-dependent Reactivity Reflects a Local Rearrangement and Exposure of the Side Chains-We observed large differences in the rates of cysteine modification at three (residues 331, 333, and 335) of the five residues in the E1371Q background. It is possible that this mutation, rather than the open state, has altered the conformation of the thiol side chains to affect their reactivity. Furthermore, when channel Po was modulated independently by varying IBMX concentrations, only two residues, 331 and 333, were affected in their reactivity. Therefore, we cannot exclude the possibility that the combination of the two mutations, K335C and E1371Q, is responsible for the observed changes in reactivity of K335C. We propose that the large differences in reaction rates observed between the closed and open channel states result from conformation-dependent changes in the position of the thiol side chains of the substituted cysteines. In the channel closed state, the side chains of residues 331 and 333 are exposed to the aqueous environment, which enables a fast reaction with modifying reagents; upon channel opening, these side chains are hidden from the aqueous phase and hence react poorly with the thiol-specific reagents. Alternatively, the change in reaction rates associated with channel opening might be caused by a change in the pKa of the thiol side chains, which decreases the concentration of the reactive thiolate anion (23). If this were the case, then the reactivities with positively charged (MTSEA and MTSET) and negatively charged (MTSES) MTS reagents would be affected quite differently (11). However, we observed that all of these reagents reacted faster in the closed state. Thus, state-dependent changes in the pKa of these residues are unlikely to account for the observed changes in reactivities during gating. Furthermore, the pore-lining residues R334C and T338C showed no state-dependent changes in reactivity, which also suggests that there are no significant changes in the local electrostatic potential during channel gating. It must be pointed out that under low IBMX concentrations, a 5-fold decrease in CFTR Po cannot account for the entire difference in reactivity of I331C and L333C. High concentrations of IBMX (Ki ≈ 10 mM) are known to have an inhibitory effect on CFTR (24), most probably by decreasing its single-channel conductance (25). Hence, a small fraction of the increased reactivity of I331C and L333C at low IBMX concentrations could be due to relief from this block, although such an increase in reactivity is not observed for R334C and T338C. In addition, under low IBMX concentrations, CFTR is probably partially phosphorylated, resulting in decreased channel activity. Phosphorylation-dependent changes in CFTR channel structure could also account for some of the changes in reactivity.
Zhang et al. (26) reported that the rate of MTSET modification of R334C-CFTR expressed in oocytes was monoexponential in the WT background but followed a bi-exponential decay, because of an additional slower (nearly 20 times slower) component, in the K1250A background. The authors suggested that the slower reactivity in the open state that is stabilized in the K1250A mutant channel was due to changes in the local electrostatic environment (see above). In contrast, our results show that the reactivity of R334C does not exhibit state dependence either under varying activation levels or in the E1371Q channel. The reasons for this disparity are not due to differences in recording and perfusion techniques (patch clamp versus two-electrode voltage clamp; fast perfusion of patches versus perfusion of whole oocytes), because similar rate constants for MTSET modification (~10⁴ M⁻¹ s⁻¹) were observed in both studies. A notable difference between the two mutations is that the Walker A mutation K1250A, unlike E1371Q, decreases the ATP binding affinity of NBD2 (27), which profoundly reduces the channel opening rate (28, 29) in addition to decreasing the closing rate (30) of CFTR. Hence, the slower modification rate observed in the earlier study (26) may be specific to K1250A and not a general characteristic of the open state.
The Conformational Change at the Outer Mouth Is Local and Is Coupled to the Gating Mechanism-Channel opening appears to be associated with a conformational change near the extracellular end of TM6 that moves the exposed side chains of three amino acids away from the aqueous phase. However, these conformational changes are local, because residues 331, 333, and 335 undergo changes in MTS accessibility, whereas residues 334 and 338 do not. Therefore, we propose that the conformational change in TM6 associated with channel opening is localized to the stretch of amino acids between Lys329 and Lys335.
What is the significance of the apparent change in conformation during channel gating? Does it represent the movement of the actual gate that opens the channel, or does it represent a local conformational change that is coupled to channel gating but is not causally involved in channel opening? Because all five of these residues are accessible from the extracellular side in the closed state, they may be more peripheral to the gate that physically restricts access to the intracellular side. The precise identity and motion of the intracellular gate is unknown, but MTS reagents and Cd²⁺ binding to cysteines introduced at either residue Ile331 or Leu333 interfered with its motion, as evidenced by a reduced channel opening rate. We suspect that restricting the movement of these two residues prevents the gate from opening, either by interfering with its motion directly or by stabilizing the closed state allosterically. Residues Ile331 and Leu333 may lie on the moving part of the gate or alternatively be contained in a region involved in a multi-step gating reaction that occurs before the movement of the gate. In contrast, the observed conformational change at residue Lys335 does not appear to be tightly coupled to the gate, because deposition of positive charge there had minimal effects on gating. Nevertheless, Lys335 is near the conduction pathway and can influence Cl⁻ permeation via electrostatic effects, because the addition of negative charge (via MTSES⁻) to this position inhibited channel conductance. The results from anion substitution experiments also support a similar conclusion (12, 31).
Structural and Functional Implications-Thermodynamic analysis of CFTR channel opening suggests that the transition state for channel opening is represented by a strained molecule in which the NBD dimers have already formed but the pore remains closed (32). The movement of the hydrophobic side chains of Ile331 and Leu333 into a hydrophobic environment might provide the required relief to open the channel. The observed differences in reactivities between the closed and open state conformations were restricted to a small stretch of amino acids near the extracellular end of TM6. Importantly, we observed no pattern of reactivity in the channel open state that was complementary to the pattern seen in the closed state. A complementary pattern might be expected if the closed to open state transition involved a rigid body rotation of TM6 around its long axis, thereby exposing the face of the helix buried in the closed state to the aqueous phase in the open conformation. In P-glycoprotein, large helical rotations were suggested to account for the extensive rearrangements of transmembrane helices observed upon ATP binding (33, 34). Instead, our results imply that relatively modest changes in the overall architecture of the channel may be associated with the transition between closed and open states of CFTR. Structural dynamics studies of MsbA during ATP hydrolysis have also ruled out large rigid helix rotations; instead, large conformational changes were restricted to extracellular and intracellular loops (35). A comparison of the recent crystal structures of ABC transporters suggests that the differences between outward and inward facing conformations of ABC transporters may be limited and can be accommodated with little change in the overall architecture (36−38). Although further analysis of other transmembrane segments is required to completely characterize the conformational changes in CFTR, these results may provide insights into the conformational changes and transport cycles of other ABC transporters.
"Biology"
] |
Intelligent Solutions in Chest Abnormality Detection Based on YOLOv5 and ResNet50
Computer-aided diagnosis (CAD) has nearly fifty years of history and has assisted many clinicians in diagnosis. With the development of technology, researchers have recently used deep learning methods to obtain highly accurate results in CAD systems. With CAD, the computer output can serve as a second opinion for radiologists and help doctors make the final decisions. Chest abnormality detection is a classic detection and classification problem; researchers need to classify common thoracic lung diseases and localize critical findings. For the detection problem, there are two families of deep learning methods: one-stage methods and two-stage methods. In our paper, we introduce and analyze some representative models, such as RCNN, SSD, and the YOLO series. In order to better solve the problem of chest abnormality detection, we propose a new model based on YOLOv5 and ResNet50. YOLOv5 is the latest model in the YOLO series and is more flexible than earlier one-stage detection algorithms. In our paper, YOLOv5 is used to localize the abnormality region. On the other hand, we use ResNet for classification, which avoids gradient explosion problems in deep networks. We then filter the results obtained from YOLOv5 with ResNet: if ResNet recognizes that the image is not abnormal, the YOLOv5 detection result is discarded. The dataset is collected via VinBigData's web-based platform, VinLab. We train our model on the dataset using the PyTorch framework and use mAP, precision, and F1-score as the metrics to evaluate our model's performance. In our experiments, our method achieves superior performance over other classical approaches on the same dataset. The experiments show that our model's mAP is 0.010, 0.020, and 0.023 higher than those of YOLOv5, Fast RCNN, and EfficientDet, respectively. In addition, in the dimension of precision, our model also performs better than the other models. The precision of our model is 0.512, which is 0.018, 0.027, and 0.033 higher than YOLOv5, Fast RCNN, and EfficientDet, respectively.
Introduction
Thanks to the development of technology, unlike with traditional diagnostic methods, radiologists can diagnose and treat medical conditions using imaging techniques like CT and PET scans, MRIs, and, of course, X-rays when patients go to hospitals [1]. However, medical misdiagnoses occur when radiologists, even the most experienced clinicians, try to interpret X-ray reports with the naked eye [2].
To this end, owing to the rapid development of imaging technology and computer computing power, a new research direction was born, called the computer-aided diagnosis (CAD) system [3]. The system has been developed extensively within radiology and is one of the major research directions in medical imaging and diagnostic radiology. It has the ability to solve several issues [4]. Firstly, the system provides a chance for doctors to focus on high-risk cases instantly [5]. Secondly, it provides more information for radiologists to make the right diagnoses in a short time. With CAD, the diagnostic stage becomes more efficient and effective.
A CAD system can be separated into two critical aspects: "detection" and "diagnosis" [6]. In the "detection" stage, the algorithm locates and segments the lesion region from the normal tissue, which greatly reduces the burden of observation for radiologists. With the validated CAD results as a second opinion, radiologists can combine them with their experience to make the final decisions [7]. Meanwhile, "diagnosis" is defined as the technology to identify the potential diseases, which can be a second reference for radiologists [8]. Mostly, the "detection" and the "diagnosis" are associated with each other, and both are based on machine learning algorithms [9, 10].
Machine learning methods in CAD systems analyze the imaging data and develop models to match the relationship between input images and output diseases using the imaging data from a patient population [11]. These machine learning methods analyze patient data to provide decision support applicable to many patient care processes, such as disease or lesion detection, characterization, cancer staging, treatment planning, treatment response assessment, recurrence monitoring, and prognosis prediction [12]. Normally, imaging data plays an important role at every stage, so image analysis is the main component of CAD [13]. Furthermore, due to the success of deep learning [14, 15] in many applications, such as target recognition and tracking, researchers are excited and have high hopes that deep learning can bring revolutionary changes to healthcare [16]. Through deep learning methods, the amount of manual feature engineering can be reduced. For instance, in [17], the authors proposed a U-Net lymph node recognition model, and the deep learning model outperforms traditional algorithms like the Mean Shift and Fuzzy C-means (FCM) algorithms. In [18], Xiaojie proposed a U-Net-based method for the identification of spinal metastasis in lung cancer. In [19], the writers studied the value of ¹⁸F-FDG PET/CT imaging and deep learning imaging in precise radiotherapy of thyroid cancer.
Likewise, in this paper, we develop an application of disease detection that applies deep learning to CT images. Our task is to localize and classify 14 types of thoracic abnormalities from chest radiographs. Our contribution is to give a solution for automatic chest detection. In particular, we divide the detection method into two steps: a detection step and a classification step. The classification step is used to filter the results from the first step.
We describe clinical diagnosis with computers in recent years and the history of computer vision in Section 2. In Section 3, a new model is proposed, utilizing the YOLOv5 algorithm for detection and ResNet50 for classification. The experimental process is shown in Section 4. The final section is the conclusion of the whole article.
Related Work
In this part, we first introduce the definition of chest radiography and then roughly explain the development of CAD. At last, we describe some object detection algorithms that are used for our task of chest abnormality detection.
Chest Radiography.
Chest radiography is the most commonly used diagnostic imaging procedure [16]. In the United States alone, more than 35 million chest radiographs are performed every year, and the average radiologist reads more than 100 chest radiographs per day [20]. Because a chest X-ray is a 2D projection condensing 3D anatomical information [21], reading it and extracting key information require a lot of medical experience and knowledge. Although these tests are clinically useful, they are costly [22]. Some radiologists lack the relevant experience; when the workload increases or the patient's condition is unusual, they may inadvertently make errors, and these errors cause misdiagnoses [23]. To this end, there is an urgent need for a technology to help radiologists make decisions. Deep learning technology automatically detects and diagnoses conditions, which greatly helps radiologists and improves their efficiency and accuracy. In some medical centers, it can support large-scale workflows and improve the efficiency of radiology departments.
The Development of Computer-Aided Diagnosis Systems.
The CAD system has been in development for many years, from traditional machine learning methods to, now, deep learning. Using CAD systems in the clinical process is an unstoppable and imperative trend. Paper [24] uses GoogLeNet with image enhancement and pretraining on ImageNet to classify chest X-ray images with an accuracy of 100%, which proves the concept of using deep learning on chest X-ray images. The author in [25] created a network based on a given query image and ranked other chest radiography images in the database based on their similarity to the query. With this model, clinicians can efficiently search past cases to diagnose existing cases. In [26], CNNs detected specific diseases in chest radiography images and assigned disease labels. Research [27] used an RNN to describe the context of annotated diseases based on CNN features and patient metadata. Recently, in [28], a CNN was designed to diagnose specific diseases by detecting and classifying lung nodules in CXR images with high accuracy.
Overview of Object Detection.
Recently, applications of target detection have been increasing [29]. There are two mainstream types of algorithms. (1) Two-stage methods: the representative is the RCNN family [30], which first uses selective search, then adds a CNN to generate a series of sparse candidate boxes, and lastly classifies and regresses these candidate boxes. The biggest advantage of two-stage models is high accuracy. (2) One-stage methods: the representatives are YOLO and SSD, which realize an end-to-end model to get the final detection result directly [31, 32]. They conduct dense sampling at different positions of the picture; different scales and aspect ratios can be used when sampling, and a CNN is used to extract features. The biggest advantage of one-stage methods is high speed. However, there are several disadvantages. The accuracy is relatively lower than that of two-stage methods, and the uniform, dense sampling makes training difficult, mainly because the positive samples and negative samples (background) are extremely unbalanced [32].
Two-Stage Methods
(1) RCNN. RCNN is the earliest model introducing the CNN method into the target detection field. After that, more and more models used CNNs for target detection, which greatly improved the results [33, 34]. Traditional detection algorithms, using sliding-window methods to examine all possible regions in turn, are complex and inefficient. RCNN improves the efficiency by using selective search to preextract a series of candidate regions that are most likely to be objects. RCNN can then focus on extracting features from only these candidate regions for judgment. The RCNN process is composed mainly of 4 steps [35] (see the sketch after this list): (1) Candidate region generation: use the selective search method to generate 1000−2000 candidate regions from an image for the second step. (2) Feature extraction: for each candidate region provided in the first step, a deep convolutional network is used to extract features. (3) Category judgment: using an SVM classifier, input the features provided in the second step into the classifier. (4) Position refinement: use regression to finely correct the position of the candidate box. However, RCNN has two disadvantages [36]. The first is that the candidate boxes do not share the neural network computation, resulting in many redundant parameters; the second is that the SVM classifier stage is overly complicated.
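The following Python sketch mirrors those four steps; every component (selective_search, cnn_features, svm_scores, bbox_regress) is a hypothetical stub, so it shows the control flow of RCNN rather than a real implementation:

```python
import numpy as np

# --- stand-ins for the real components (hypothetical, for illustration) ---
def selective_search(image, n=2000):
    """Stub: return n random candidate boxes (x1, y1, x2, y2)."""
    h, w = image.shape[:2]
    x1 = np.random.randint(0, w - 3, n); y1 = np.random.randint(0, h - 3, n)
    x2 = np.minimum(x1 + np.random.randint(1, w // 4, n), w - 1)
    y2 = np.minimum(y1 + np.random.randint(1, h // 4, n), h - 1)
    return np.stack([x1, y1, x2, y2], axis=1)

def cnn_features(crop):          # step 2: deep features per region (stub)
    return np.resize(crop.mean(axis=(0, 1)), 4096)

def svm_scores(feat):            # step 3: per-class SVM scores (stub)
    return np.random.rand(14)    # e.g., 14 thoracic abnormality classes

def bbox_regress(box, feat):     # step 4: position refinement (stub: identity)
    return box

def rcnn_detect(image, score_thr=0.9):
    detections = []
    for box in selective_search(image):                 # (1) candidate regions
        x1, y1, x2, y2 = box
        feat = cnn_features(image[y1:y2, x1:x2])        # (2) feature extraction
        scores = svm_scores(feat)                       # (3) category judgment
        c = int(scores.argmax())
        if scores[c] > score_thr:
            detections.append((bbox_regress(box, feat), c, scores[c]))
    return detections

print(len(rcnn_detect(np.zeros((512, 512, 3)))))
```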
(2) Fast RCNN. Fast RCNN improves on RCNN in the following aspects [37]: (1) Fast RCNN still uses selective search to select 2000 candidate boxes [38]. The original image is input into the convolutional network to obtain the feature map, and then each candidate box is used to extract a feature box from the feature map. Here, since the convolution is calculated only once for each position, the amount of calculation is greatly reduced. But Fast RCNN sets candidate boxes of different sizes in the first step; these need to be converted to the same size through the ROI pooling layer. (2) There is no separate SVM classifier and regressor in Fast RCNN [39]. All the results for the classification and for the position and size of the prediction box are output through the convolutional neural network. In order to increase the calculation speed, the network finally uses SVD instead of the fully connected layer.
(3) Faster RCNN. Fast RCNN ignores the fact that the detection network can share calculations with the region proposal method. Therefore, Faster RCNN proposes a region proposal network (RPN) from the perspective of improving the speed of region proposal, realizing fast region proposal on the GPU [40].
Using the RPN network instead of the selective search used by Fast RCNN to extract candidate regions is equivalent to Faster RCNN = RPN + Fast RCNN, and RPN and Fast RCNN share convolutional layers [41].
Faster RCNN has the following characteristics [42]: (1) Multiscale targets: use RPN candidate regions with anchors of different sizes and aspect ratios to solve multiscale problems. (2) Positive and negative samples: calculate the IOU between the anchors and the ground-truth box and assign positive and negative samples through a threshold. (3) Sample imbalance: randomly sample 256 anchors in each batch for bounding-box regression training and keep the numbers of positive and negative samples as equal as possible, to avoid the gradient being dominated by too many negative samples.
One-Stage Methods
(1) YOLOv1. It is the pioneering work of one-stage target detection [43].
(1) Fast speed: compared with two-stage target detection methods, YOLOv1 uses an end-to-end method, which is faster. (2) Use global features for reasoning: because of the use of global context information, compared with sliding-window and proposal-box methods, the judgment of the background is more accurate. (3) Generalization: the trained model still performs well in new fields or on unexpected inputs. (2) SSD. The core design concept of SSD is summarized in the following three points [44]: (1) Use multiscale feature maps for detection. SSD utilizes large-scale feature maps to detect smaller targets and vice versa. (2) Utilize convolution for detection. SSD directly uses convolution to extract detection results from different feature maps. (3) Set a priori boxes. SSD draws on the anchors of Faster RCNN and sets a priori boxes with different scales or aspect ratios for each unit. The predicted bounding boxes are based on these prior boxes, which reduces the difficulty of training to a certain extent. In general, each unit will set multiple a priori boxes, and their scales and aspect ratios are different.
SSD uses VGG16 as the basic model and then adds a new convolutional layer on the basis of VGG16 to obtain more feature maps for detection [45].
There are five main advantages of SSD [46]: (1) Real time: it is faster than YOLOv1, because the fully connected layer is removed. (2) Labeling scheme: by predicting the category confidence and the deviation of the prior box from a set of relative fixed scales, the influence of different scales on the loss can be effectively balanced. (3) Multiscale: multiscale target prediction is performed by using multiple feature maps and anchor boxes corresponding to different scales. (4) Data enhancement: data enhancement is performed by random cropping to improve the robustness of the model. (5) Sample imbalance: through hard sample mining, the a priori boxes with the highest confidence among negative samples are used for training, and the ratio of positive to negative samples is set to 1:3, which makes model training converge faster. Although the detection speed of YOLOv1 is fast, it is not as accurate as the RCNN detection methods. YOLOv1 is not accurate enough in object localization and has a low recall rate [47]. YOLOv2 proposes several improvement strategies to improve the positioning accuracy and recall rate of the YOLO model, thereby improving mAP [48, 49].
(1) Batch normalization: it greatly improves performance. (2) Higher resolution classifier: it makes the pretraining classification task resolution consistent with the target detection resolution. (3) Convolutional with anchor boxes: using a fully convolutional neural network to predict deviations instead of specific coordinates, the model is easier to converge. (4) Dimension clusters: set the scale of the anchor boxes through a clustering algorithm to obtain better a priori boxes and alleviate the impact of different scales on the loss. (5) Fine-grained features: integrate low-level image features through simple addition. (6) Multiscale training: through the use of a fully convolutional network, the model supports the input of images at multiple scales and trains on them in turn. (7) Construct Darknet-19 instead of VGG16 as a backbone with better performance. YOLOv3 further builds on these ideas: (1) Real time: compared with RetinaNet, YOLOv3 sacrifices detection accuracy and uses the Darknet backbone feature extraction network instead of ResNet101 to obtain faster detection speed. (2) Multiscale: compared with YOLOv1-v2, the same FPN network as RetinaNet is used as an enhanced feature extraction network to obtain higher detection accuracy. (3) Target overlap: by using logistic regression and a two-class cross-entropy loss function for category prediction, each candidate box is classified with multiple labels, handling the possibility that a single detection box may contain multiple targets at the same time. YOLOv4 adds further improvements: (1) Real time: drawing lessons from the CSPNet network structure, Darknet53 is improved to CSPDarknet53 to reduce the model parameters and calculation time. (2) Multiscale: the neck separately introduces the PAN and SPP network structures as an enhanced feature extraction network, which can effectively fuse multiscale features and has higher accuracy than the FPN network. (3) Data enhancement: the introduction of Mosaic data enhancement can effectively reduce the impact of batch size when using BN. (4) Model training: IOU, GIoU, DIoU, and CIoU are used for the regression of the target box, which gives higher detection accuracy than the squared difference loss used by YOLOv3.
Methodology
In this section, we introduce our method for chest abnormality detection. We use a 2-step method. The first step is to use a target detection method, such as YOLOv5, to perform target detection. The second step is to use an image classifier to perform binary classification (whether there is an abnormality); if it recognizes that the image is not abnormal, the detection result of YOLOv5 is discarded.
YOLOv5 for Detection.
The whole structure of YOLOv5 [53] is shown in Figure 1.
e YOLO family of models consists of three main architectural blocks: Backbone, Neck, and Head.
(i) YOLOv5 Backbone: it employs CSPDarknet as the backbone for feature extraction from images, consisting of cross-stage partial networks. (ii) YOLOv5 Neck: it uses PANet to generate a feature pyramid network to perform aggregation on the features and pass them to the Head for prediction. (iii) YOLOv5 Head: it has layers that generate predictions from the anchor boxes for object detection. Apart from this, YOLOv5 uses the following choices for training [54]: (i) Activation and optimization: YOLOv5 uses leaky ReLU and sigmoid activations, with SGD and ADAM as optimizer options. (ii) Loss function: it uses binary cross-entropy with logits loss. YOLOv5 has multiple varieties of pretrained models. The difference between them is the trade-off between the size of the model and inference time. The lightweight version YOLOv5s is just 14 MB but not very accurate. On the other side of the spectrum, we have YOLOv5x, whose size is 168 MB but which is the most accurate version of its family [55].
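As a side note, one of these pretrained variants can be loaded through the public Ultralytics torch.hub interface in a few lines (the image path is hypothetical, and an internet connection is needed on first use):

```python
import torch

# Load a pretrained YOLOv5 model from the Ultralytics hub;
# 'yolov5s' is the small variant, 'yolov5x' the largest and most accurate.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

results = model('chest_xray.png')   # hypothetical input image path
results.print()                     # class, confidence, and box summary
boxes = results.xyxy[0]             # tensor of (x1, y1, x2, y2, conf, class)
```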
Compared with the earlier YOLO series, YOLOv5 has several highlights [56]: (1) Multiscale: use FPN to enhance the feature extraction network instead of PAN, making the model simpler and faster. (2) Target overlap: use the rounding method to find nearby positions, so that the target is mapped to multiple central grid points around it.
ResNet50 for Classification.
ResNet [57] is the abbreviation of Residual Network. It is one of the backbones of classic computer vision tasks and is widely used in the field of target classification. The classic ResNets include ResNet50, ResNet101, and so on. The emergence of the ResNet architecture solves the problem of developing the network in a deeper direction without gradient explosion. As we know, deep convolutional neural networks are very good at identifying low-, medium-, and high-level features from images, and stacking more layers can usually provide better accuracy. The main component of ResNet is the residual module, as shown in Figure 2. The residual module consists of two dense layers and a skip connection; the activation function of the two dense layers is the ReLU function.
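A minimal PyTorch sketch of a residual block with a skip connection, shown here in the common convolutional form with illustrative layer sizes:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: two layers plus a skip connection,
    so the block learns F(x) and outputs F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)       # skip connection keeps gradients flowing

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)       # torch.Size([1, 64, 56, 56])
```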
The Whole Structure of the Detection Model.
To solve the chest abnormality detection problem, we design a new hybrid model combining YOLOv5 and ResNet50. After processing the original images, we input them into YOLOv5 and ResNet50, and then pass the outputs into the filter. The function of the filter is mainly to remove the anomalies identified by YOLOv5 on images that ResNet classifies as not abnormal. The whole structure of our model is shown in Figure 3.
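A sketch of this filtering step, assuming a detector and a binary classifier with the simple interfaces shown below (the function names, preprocessing, and class indices are assumptions for illustration, not the authors' code):

```python
import torch

def filtered_detections(image, detector, classifier, normal_idx=0):
    """Two-step inference: keep the detector's boxes only if the binary
    classifier judges the image abnormal. Assumes a batch of one image
    and that class index `normal_idx` means 'not abnormal'."""
    with torch.no_grad():
        boxes = detector(image)             # step 1: candidate abnormalities
        logits = classifier(image)          # step 2: normal vs. abnormal
        is_normal = logits.argmax(dim=1).item() == normal_idx
    return [] if is_normal else boxes       # discard boxes on normal images
```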
Experiments
In this section, we introduce the datasets we utilize and the performance metrics that are important for the research.
VinBigData's Image Datasets.
Our dataset was obtained from VinBigData, an organization promoting basic research on novel and highly applicable technologies. VinBigData's medical imaging team conducts research on collecting, processing, analyzing, and understanding medical data. They are committed to building large-scale, high-precision medical imaging solutions based on the latest advances in artificial intelligence to promote effective clinical workflows. The VinDr-CXR dataset was built in three steps [58]: (1) Data collection: when patients undergo chest radiographic examination, medical institutions collect raw images in DICOM format, and the images are then deidentified to protect patient privacy. (2) Data filtering: because not all images are valid, it is necessary to filter the raw images. For example, images of other modalities, other body parts, low quality, or incorrect orientation all need to be filtered out by a machine learning-based classifier. (3) Data labeling: a web-based markup tool, VinLab, was developed to store, manage, and remotely annotate DICOM data.
We use 15,000 scans as the training dataset and the other 3,000 scans as the test dataset. In addition, train.csv holds the training set metadata, with one row for each annotated object, including a class and a bounding box. It contains 8 columns: the unique image identifier, the name of the class of the detected object, the ID of the class of the detected object, the ID of the radiologist that made the observation, and the minimum and maximum coordinates of the object's bounding box. Some images in both the test and training sets have multiple objects. Figures 4 and 5 show examples of input and output images, respectively.
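A plausible way to load and summarize such a metadata file with pandas; the header names follow the column description above and the "No finding" label used by VinDr-CXR, but should be checked against the actual file:

```python
import pandas as pd

# train.csv: one row per annotated object, columns as described above
df = pd.read_csv("train.csv")

# Keep only rows with an actual finding, then group boxes by image
abnormal = df[df["class_name"] != "No finding"]
boxes_per_image = abnormal.groupby("image_id")[["x_min", "y_min",
                                                "x_max", "y_max"]]

# Class distribution of the annotations (cf. the histogram in Figure 6)
print(abnormal["class_name"].value_counts())
```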
Evaluation Metrics.
In this section, we describe the evaluation metrics used in our experiments. In a CAD system, the main part is detection. Common metrics for measuring the performance of classification algorithms include accuracy, sensitivity (recall), specificity, precision, F-score, the ROC curve, log loss, IOU [59], overlapping error, boundary-based evaluation, and the Dice similarity coefficient. The metrics we use are the mean Average Precision (mAP) [60], precision, and F1-score. We briefly introduce them in the following part.
According to the theory of statistical machine learning, precision is a two-category statistical indicator whose formula is precision = TP/(TP + FP), and the formula of recall is recall = TP/(TP + FN). Furthermore, it is necessary to define TP, FP, and FN in the detection task.
B_gt represents the ground-truth box (Ground Truth, GT) of the target, and B_p represents the predicted box. By calculating the IOU of the two, IOU = area(B_p ∩ B_gt)/area(B_p ∪ B_gt), it can be judged whether the predicted detection box meets the conditions. The IOU is illustrated graphically in the original figure (omitted here).
Then, with these definitions in place, we introduce mAP. AP is the area under the P-R curve of a given class, and mAP is the average of the areas under the P-R curves of all classes.
F1-score is defined as the harmonic average of precision and recall: F1 = 2 × precision × recall/(precision + recall).
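The box matching and scoring behind these metrics can be made concrete in a few lines of Python; the numbers in the example calls are arbitrary:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def prf1(tp, fp, fn):
    """Precision, recall, and F1 from TP/FP/FN counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# A prediction counts as TP when its IOU with a ground-truth box
# meets the chosen threshold (0.6 in the experiments below).
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))   # 50/150 ~= 0.333
print(prf1(tp=30, fp=10, fn=20))             # (0.75, 0.6, ~0.667)
```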
Experimental Results and Analysis.
We draw a histogram of the class distribution to present our dataset clearly in Figure 6. It is clear that the "no finding" class accounts for the largest proportion. Classes 0, 3, 11, and 13 have a higher proportion, corresponding to aortic enlargement, cardiomegaly, pleural thickening, and pulmonary fibrosis, respectively. Meanwhile, classes 1 and 12 have a lower proportion, corresponding to atelectasis and pneumothorax.
Figure 7 shows the F1 indicator during training for each category. It is clear that the F1-score tends to 0 as the confidence threshold increases. From the figure, the earliest class to approach 0 is consolidation. In addition, to evaluate the performance of our proposed model, we select some previous classical models and compare them on the same dataset and evaluation metrics. We compare them on the metrics of mAP and precision, whose definitions were introduced above. The classical models we choose are YOLOv5, Fast RCNN, and EfficientDet. Table 2 shows the experimental results of the competing models. In the dimension of mAP (with an IOU threshold of 0.6 between the predicted box and the ground truth), it is evident that the model we propose has the best performance, 0.254, which is 0.010, 0.020, and 0.023 higher than YOLOv5, Fast RCNN, and EfficientDet, respectively. Meanwhile, in the dimension of precision, our model also performs better than the other models. The precision of our model is 0.512, which is 0.018, 0.027, and 0.033 higher than YOLOv5, Fast RCNN, and EfficientDet, respectively.
Conclusions
The motivation of our work is to develop a system to automatically detect chest abnormalities using deep learning techniques. Our work can help doctors improve their diagnoses and make faster decisions. In the introduction, the background of computer-aided diagnosis (CAD) is stated and some related works are covered. At the end of the introduction section, our method is proposed. The detection method contains two steps. The first step uses object detection algorithms like YOLO and EfficientDet to find the location (the bounding box) in the CT scan images; a high-possibility result is one whose confidence is greater than a previously set threshold. The second step uses a binary CNN classifier like ResNet to discard the detections generated in the first step on images classified as not abnormal. In the first step, we cover classical detection neural networks like RCNN, Fast RCNN, Faster RCNN, SSD, and the YOLO series; the structures and some characteristics of these models are carefully described. In the experiment section, the VinBigData dataset is first introduced. The training parameters of the models, the evaluation metrics, and the figures of the training process are also given for reproducibility. Table 2 shows the performance of YOLOv5, Fast RCNN, EfficientDet, and our proposed method. It is evident that the two-step method (YOLOv5 + ResNet50) is better than the methods using detection only (YOLOv5, Fast RCNN, and EfficientDet), which means our method has the best performance.
Data Availability
All data used to support the findings of this study are included within the article.
Disclosure
Yu Luo and Yifan Zhang are co-first authors.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Computer Science",
"Medicine"
] |
Water-Induced Tuning of the Emission of Polyaniline LEDs within the NIR to Vis Range
Tuning of the emission within the near-infrared to visible range is observed in p-toluenesulfonic acid-doped polyaniline light emitting diodes (PANI/PTSA) when water molecules are absorbed by the active material (wet PANI/PTSA). This is a hybrid material that combines a conjugated π-electron system and a proton system, both strongly interacting in close contact with each other. The proton system successfully competes with the electron system in excitation energy consumption (when electrically powered), thanks to the inductive resonance energy transfer from electrons to protons in wet PANI/PTSA at the energy levels of combinations of vibrations and overtones in water, with subsequent light emission. Wet PANI/PTSA, in which electrons and protons can be excited in parallel owing to fast energy transfer, may emit light in different ranges (on a competitive basis). This results in intense light emission with a maximum at 750 nm (and a spectrum very similar to that of an excited protonic system in water), which is blue-shifted compared to the initial one at ∼850 nm that is generated by the dry PANI/PTSA sample when electrically powered.
■ INTRODUCTION
Electron transfer, proton transfer, and excitation energy transfer from electrons to protons with excitation of the proton system are of great importance for key biological processes and modern advanced technological applications,1−19 both as intramolecular processes2,3,6,9,12,17,21 and as external, intermolecular, and intersystem ones.3,8,11−14,17,19,21 The last case is particularly interesting in connection with technical applications,11,14,18 but such processes also have a crucial role in the functioning of biological systems at the cellular and subcellular levels.1,3,7,8,13,14,18 Such complex problems and systems are very often successfully examined with experimental and theoretical models, supported by computer simulations.5−7,9,17,19 A unique model material for studying some aspects of these processes by electroluminescence20,21 is p-toluenesulfonic acid-doped polyaniline (PANI/PTSA). This is due to the presence of a conjugated π-electron system with relatively high electrical conductivity and a coupled hydrogen bonding system interacting with it, which can be modified by the presence of water molecules (Figure 3a).
The emission of light by electrically powered conductive polymers has been of interest to us for over 10 years; unlike other laboratories, we use macroscopic samples (∼1 mm) in experiments instead of thin layers with a thickness of ∼1 μm.20 We have described polyaniline light-emitting diodes (LEDs) with non-linear effects, including stimulated Raman scattering21 and polyaniline lasing,22 and also the electroluminescence of polypyrrole.23 On the other hand, we have discovered emission of light in the entire ultraviolet−visible−near-infrared (UV−vis−NIR) range due to the excitation of protons in the protonic analogue of the p−n junction, the protonic LED,24 formed in water as a protonic semiconductor, appropriately doped.25,26 In this paper, we describe the unique light emission observed in polyaniline doped with p-toluenesulfonic acid (PANI/PTSA), which is modified (tuned) in the range from 850 nm (NIR) to 750 nm (vis) in the presence of water.
The emitter is a hybrid material that combines conjugated π-electrons and a coupled proton system, both strongly interacting with each other while remaining in close contact, so that electrons and protons can be simultaneously excited due to energy transfer when the system is electrically powered.
■ RESULTS AND DISCUSSION
This work is devoted to the unique properties of polyaniline doped with p-toluenesulfonic acid, PANI/PTSA, a hybrid model material in which electrons and protons are excited when electrically powered, emitting light in different ranges in a competitive way. The curiosity is that the protonic system starts to be active (effective in emission) in the presence of water, and it is competitive despite emitting light of higher photon energy (blue-shifted). The final result resembles photon upconversion (NIR emission transits into the vis range), but the mechanism is different.
The diode formed with dry polyaniline doped with p-toluenesulfonic acid (a solid pressed pellet, with a thickness of 0.5 ± 0.01 mm and a diameter of 3 mm) emits mostly in the NIR with a maximum at 840−885 nm (Figures 1 and 2a). This corresponds to the excitation of π-electrons and emission due to charge-transfer (CT) processes in organic materials.20,27−29 Here, CT between PTSA and polyaniline and also between quinoid and aromatic moieties in polyaniline chains is to be considered, including the formation of polarons that are weakly emissive.30 PTSA can interact with PANI in two ways: • due to Coulomb forces between the negative and positive charges of anionic −SO3− and cationic −N−H+ groups, respectively (Figure 3a, PTSA2 and PTSA3); • due to non-polar forces originating from interactions of π-electron aromatic rings (Figure 3a, PTSA1), which are particularly adequate for the CT process; in addition, these are responsible for lowering the energy of the electron excited state and the energy of the photons generated, leading to emission in the NIR region with a maximum at 845−885 nm; the electroluminescence of PANI/HCl, polyaniline doped with HCl, with no π-electron interactions, is observed as broad bands at 460, 575, and 657 nm, with a maximum at 575 nm.21 Influence of Water. With the increase in water content (humidity of the active material, polyaniline doped with p-toluenesulfonic acid), a shift of the emission maximum toward the blue and an increase in the light intensity in the part of the spectrum around 750 nm are observed, while the emission intensity at 845−885 nm clearly decreases (Figures 1 and 2a,b).
Generally, the high dielectric constant of water should result in a bathochromic shift of the emission,31 which is not observed. On the other hand, the observed blue shift of 100 nm is too large to be considered a typical hypsochromic effect related to the protonation of nitrogen-atom n-electrons upon polyaniline hydration. In addition, there is no pure n−π transition identified in polyaniline, and the spectral range considered corresponds to the CT and polaron bands.31,32 This indicates another mechanism: an effective transfer of the excitation energy (originally provided by the electric current) from the electron system of polyaniline into the protonic one.3 The energy transfer to the coupled protonic system37−39 is fast and effective enough that light emission from the excited protonic system24 takes place as a competitive process. This is similar in mechanism to the generation of polaritons by strong coupling, where the resonant energy exchange between a confined optical mode and a material transition is faster than any decay process.44 In consequence, the loss of energy is lower than for the excited electronic states of polyaniline, which results in a blue-shifted spectrum and more intense light emission. In wet PANI/PTSA, owing to inductive resonance energy transfer (IRET) from electrons to protons, the supplied electrical energy excites the protonic system up to the energy levels of combination vibrations and overtones in the water-coupled hydrogen bonding,3,24 with subsequent emission of light at 750 nm. This is a unique behavior: usually, the hydrogen-bonding system is responsible for the dissipation of the excitation energy, through rapid energy transfer between the electron and proton systems, followed by energy flow in the hydrogen-bonding network and, consequently, relaxation of the electronic excitations through conjugated hydrogen bonds.3,33−37
Dissipation and Transfer of the Excitation Energy. The light emission from the protonic system excited via IRET from electrons to protons is a new phenomenon, which dominates in experiments performed with wet PANI/PTSA (Figure 2a,b). The emission spectrum consists of two components, additive in a competitive way: the contribution of the excited basic electron system of dry PANI/PTSA with a maximum at ∼850 nm, and that of the excited protonic system (including water) at ∼750 nm in wet PANI/PTSA (Figures 1 and 2a,b). In both cases, despite the emission of light, part of the excitation energy is also dissipated in non-radiative processes.3 This is particularly effective when the electron and proton systems consume the excitation energy at a comparable level, that is, for an emission amplitude ratio A(850 nm)/A(750 nm) equal to 1 (Figure 2b). The proton system is then not yet ready and effective enough in emission, but both channels dissipate the energy, leading to the lowest light emission.
Generally, in polyaniline there is a strong coupling between electrons and protons, including the protons of absorbed water.31 Protonation essentially influences the PANI π-electron system and the electrical conductivity (e.g., the emeraldine salt, Figure 3b). This enables efficient energy transfer and excitation (Figure 4). The optical energy gap corresponding to the vis−NIR emission spectrum of dry PANI/PTSA is higher and amounts to 1.934 eV. The pernigraniline fraction (Figure 3c), with a lower conductivity and a band gap of about 2 eV, is expected to exist as domains that act as emission centers; the domain structure of polyaniline has been described previously.42,43 In our experiments, the high electrical conductivity of the polyaniline matrix is necessary to achieve the threshold emission current of 7.7 A at 3.04 V, followed by the operating current of 18 A (minimum) at 3.84 V (Figure 5). Switching the emission ON is rapid and reversible with a voltage change of 0.8 V. The PANI/PTSA electrical conductivity varies from 0.7−1.0 S/cm (at "OFF") to 3.7−3.8 S/cm when emitting light. The process is repeatable and reproducible.
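These electrical figures can be cross-checked against the pellet geometry given in the Methodology section. A minimal sketch (Python) under two simplifying assumptions of ours, uniform current flow and negligible electrode contact resistance:

    import math

    # Pellet geometry from the Methodology section: 0.5 mm thick, 3 mm diameter.
    t = 0.5e-3                   # thickness, m
    A = math.pi * (1.5e-3) ** 2  # cross-sectional area, m^2

    sigma_on = 3.7 * 100         # "ON"-state conductivity, S/m (3.7 S/cm)
    R = t / (sigma_on * A)       # bulk resistance of the pellet, ohm
    print(f"R ~ {R * 1e3:.0f} mOhm, V at 18 A ~ {R * 18:.2f} V")

The estimate (∼0.19 Ω, ∼3.4 V at 18 A) is close to the measured 3.84 V; the remainder is plausibly contact resistance at the copper electrodes.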
Stability of the Active Material. Despite sudden changes in the current (and in the electrical conductivity, e.g., 0.76, 3.8, and 1.05 S/cm before, during, and after emission, respectively), the material remains relatively stable with respect to its electronic structure and electrical properties (Figure 5), including the characteristic parameters measured by electron paramagnetic resonance (EPR). The EPR signals are very strong for all samples examined (Figure 6), which indicates a high concentration of polarons (the dominant charge carriers in polyaniline). Each spectrum consists of a single narrow Lorentzian line, with an asymmetry factor between 1.01 (dry PANI/PTSA) and 1.16 (wet PANI/PTSA). The values of the g-factor lie in a narrow range starting from 2.00291 (wet PANI/PTSA).
Tuning the Emission. Interestingly, the fast energy transfer from the electronic system to the proton system3 in wet PANI/PTSA (Figure 1) limits non-radiative energy dissipation. Polarons and bipolarons generated in the polyaniline π-electron system (Figure 6) are non-light-emitting quasiparticles,32 and their energy can be dissipated by molecular vibration. On the other hand, excitation of the protonic system is efficient in light emission, as previously observed.24 There is a convincing similarity between the electroluminescence spectra of sulfonated-polystyrene-doped water24 and the protonic-system contribution at 750 nm in the current wet PANI/PTSA experiments (Figure 7). Thus, the process of exciting the protonic system and then emitting light is efficient and effectively competitive with the non-radiative energy dissipation from the excited electronic system of polyaniline.
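For orientation, the g-factor fixes the EPR resonance field through the standard condition hν = gμBB. In the sketch below (Python), the 9.4 GHz microwave frequency is our assumption of a typical X-band value; the paper does not state the operating frequency.

    h = 6.62607e-34      # Planck constant, J s
    mu_B = 9.27401e-24   # Bohr magneton, J/T
    nu = 9.4e9           # assumed X-band microwave frequency, Hz
    g = 2.00291          # reported g-factor of wet PANI/PTSA

    B_res = h * nu / (g * mu_B)  # resonance field from h*nu = g*mu_B*B
    print(f"B_res ~ {B_res * 1e3:.0f} mT")  # ~335 mT for these inputs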
Due to the efficient electron-to-proton energy transfer and the excitation of the protonic system, lower dissipation of the supplied energy leads to the emission of photons of greater energy than from the excited polyaniline π-electron system when electrically powered. The difference in the energy of the photons at 750 and 850 nm is ∼0.195 eV (0.19448 eV), which corresponds to the excitation of molecular vibrations in polyaniline, for example, of the aromatic rings and quinoid groupings at ∼1570 cm−1.38−40 These vibrations have a strong influence on the electron energy (they partially absorb and dissipate it) but are not involved in the excitation of the proton system, so after the fast transfer (Figure 1) the energy is no longer dissipated in this way, eventually leading to a blue shift of the emission spectrum in the presence of water (wet PANI/PTSA).
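The quoted values are internally consistent under the standard conversions E [eV] ≈ 1239.84/λ [nm] and 1 eV ≈ 8065.5 cm−1; a quick check (Python, with rounded constants):

    # Photon-energy difference between the 750 and 850 nm emission maxima.
    E_750 = 1239.84 / 750.0   # ~1.653 eV
    E_850 = 1239.84 / 850.0   # ~1.459 eV
    dE = E_750 - E_850        # ~0.1945 eV, matching the quoted 0.19448 eV
    print(f"dE = {dE:.5f} eV = {dE * 8065.54:.0f} cm^-1")  # ~1569 cm^-1, near 1570 cm^-1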
Changes in the emission spectrum (estimated roughly by the amplitude ratio at 850 and 750 nm) depend on the sample moisture (Figure 2). In this way, the emission spectrum can be modified by gradually shifting the emission from the NIR to the vis range.
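A sketch of how such a moisture metric can be pulled from a recorded spectrum (Python/NumPy); the wavelength and intensity arrays below are hypothetical stand-ins for the 0.5 nm resolution spectrometer output, not measured data:

    import numpy as np

    # Hypothetical spectrum: two overlapping emission bands at 850 and 750 nm.
    wavelength = np.arange(400.0, 1000.0, 0.5)
    intensity = (np.exp(-((wavelength - 850) / 30) ** 2)
                 + 0.6 * np.exp(-((wavelength - 750) / 25) ** 2))

    def amplitude_near(wl, inten, center, window=10.0):
        """Peak amplitude within +/- window nm of a nominal band center."""
        mask = np.abs(wl - center) <= window
        return inten[mask].max()

    ratio = amplitude_near(wavelength, intensity, 850) / amplitude_near(wavelength, intensity, 750)
    print(f"A(850)/A(750) = {ratio:.2f}")  # a ratio near 1 marks the least-emissive regime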
This process is expected to be particularly effective when using micro- and nanostructured light-emitting materials, as in our case (Figure 8c), due to the easy diffusion and good contact between water molecules and the active material, here PANI macromolecules. The effect depends on the ability of the proton system (in water) to interact with the electron system, which is more effective in a micro- and nanostructured material.
■ CONCLUSIONS
Fast transfer of excitation energy from π-electrons to protons in conjugated hydrogen bonds, and effective excitation of the proton system in a hybrid material with strongly coupled electron and proton systems (wet PANI/PTSA), lead to the emission of photons with higher energy than from the π-electron system of dry PANI/PTSA when the sample is electrically powered. The water proton system incorporated into wet PANI/PTSA effectively competes with the electron system for the excitation energy, resulting in blue-shifted light emission due to lower energy dissipation. As the amount of absorbed water increases, the initial emission at ∼850 nm (NIR) from dry PANI/PTSA gradually shifts toward the vis range, reaching ∼750 nm for wet PANI/PTSA.
Apart from the possibility of tuning the emission spectrum, the fast and effective transfer of excitation energy from the electron system to the proton system at the level of the overtones of the fundamental oscillations in water molecules (regardless of the source of the excitation energy) is very important for basic biological processes3 and also for some modern technical applications, for example, water splitting for fuel production.3 This gives our results a potentially even broader relevance.
■ METHODOLOGY
Materials and Methods. Polyaniline was prepared by oxidation of aniline hydrochloride (10% in water at pH about 1) using the chemical method described elsewhere, modified in our laboratory.
The polymeric material (polyaniline) was characterized by physical and chemical methods: FTIR, EPR, elemental analysis, and electrical conductivity measurements, giving results similar to the values measured for materials previously prepared in our laboratory.21,22,45
Chemicals and solvents: Hydrochloric acid (HCl) (Stanlab, pure p.a.): 600 mL of 1 M solution prepared from concentrated hydrochloric acid (36%); 50 mL of 36% HCl was dissolved in 550 mL of H2O.
Aniline hydrochloride (C6H8NCl) (Fisher Scientific, pure p.a.): 6.9 g of aniline hydrochloride dissolved in 300 mL of 1 M hydrochloric acid.
Polyaniline Emeraldine Base, PANI-EB. Aniline hydrochloride (6.9 g) was dissolved in 300 mL of 1 M hydrochloric acid. At the same time, 11.4 g of ammonium persulfate was dissolved (separately) in 200 mL of 1 M hydrochloric acid.
The ammonium persulfate solution was slowly added to the solution of aniline hydrochloride. The resulting dark-green solution was stirred with a magnetic stirrer at room temperature for 24 h. The precipitate was filtered under reduced pressure and washed several times with distilled water until the filtrate was nearly colorless and neutral. The solid product was treated with 500 mL of 1 M ammonium hydroxide and stirred with a magnetic stirrer at room temperature for 20 h. FTIR spectra were measured on a suspension in KBr pressed into a disc; see, for example, Figure 4 (B, benzenoid rings; Q, quinoid rings).
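As a plausibility check of our own (the authors do not state the ratio), the recipe corresponds to a roughly 1:1 oxidant-to-monomer molar proportion (Python, molar masses rounded):

    # Molar masses, g/mol.
    M_aniline_HCl = 129.59   # C6H5NH2*HCl
    M_APS = 228.20           # (NH4)2S2O8, ammonium persulfate

    n_monomer = 6.9 / M_aniline_HCl   # ~53 mmol in 300 mL of 1 M HCl
    n_oxidant = 11.4 / M_APS          # ~50 mmol in 200 mL of 1 M HCl
    print(f"oxidant/monomer = {n_oxidant / n_monomer:.2f}")  # ~0.94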
Elemental analyses were performed using a model Vario EL III elemental analyzer (Elementar Analysensysteme GmbH, Germany).
FTIR spectra were recorded using the spectrometer model IFS 66/s (Bruker, USA).
The EPR spectra were recorded using an EPR spectrometer model SE/X 2547 (RADIOPAN, Poland).
The emission UV−vis−NIR spectra were registered on-line with an Ocean Optics PC2000 spectrometer at a resolution of 0.5 nm, at the same time as the current and the voltage were measured.
The beam profile was pictured directly with a digital camera: a Pentax K-r (12.4 MP) with an 18−55 mm lens.
The measurements were performed under ambient conditions and in a dark room.
The samples, formed at a pressure of up to 6000 kG/cm² as pellets with a thickness of 0.4−0.5 ± 0.01 mm and a diameter of 3 mm (Figure 8a), were placed in a measuring holder inside a glass tube with a wall thickness of 1−2 mm, between two solid copper electrodes with a diameter of 4.5 mm (or one of 4.5 mm and the other of 3 mm) and a length of 25 mm.21−23 Despite being pressed, the material is microporous (Figure 8b), which allows it to absorb water; wet PANI/PTSA samples were prepared by direct contact with distilled water for 24 h or less.
The voltage and the current were measured with an accuracy of at least 0.1% using a Brymen digital multimeter, model BM859s (computer controlled), and a Metrahit Energy multimeter (computer controlled) with a precision standard resistor of 0.001 Ω for the current measurements, respectively. To register the light beam and the emission spectrum simultaneously, the optical fiber of the spectrometer was mounted on the same side as the photo camera, in most cases parallel to the optical axis of the camera (another configuration was also used). The distance between the sample and the aperture of the optical fiber was 1−3 cm, and the camera was located at a distance of 15−20 cm.24
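The shunt-based current reading is plain Ohm's-law arithmetic; in the illustration below (Python), the 18 mV drop is a hypothetical value consistent with the 18 A operating current, not a reported reading:

    R_shunt = 0.001    # precision standard resistor, ohm
    V_shunt = 0.018    # assumed voltage drop across the shunt, V
    I = V_shunt / R_shunt
    print(f"I = {I:.1f} A")  # millivolt-level drops are why a precision shunt is needed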
■ ACKNOWLEDGMENTS
We would like to thank the School of Science of the Adam Mickiewicz University in Poznań for financial support under the grant for the Inter-Faculty Research Project, and the Center for Advanced Technologies of the Adam Mickiewicz University in Poznań for renting the laboratory space.
"Physics"
] |
IDEAS for Transforming Higher Education: An Overview of Ongoing Trends and Challenges
The recent unexpected impact of the global pandemic on higher education has caused universities, governments, students, and teachers to reexamine all components of existing systems, including how to become more effective and efficient in using technologies for education. We have seen that moving classes online, either blended or fully online, can be done rapidly, but early reports show huge variations in quality, acceptance, completion, and learning. Thus, it is important to examine the existing research literature on pedagogical innovations and practices that use technologies. To understand this complex situation, the present study examines the current technological, organisational, and pedagogical trends and challenges using an exploratory design carried out in three stages. In stage one, a literature review of the academic and grey literature was conducted, identifying 14 trends of interest. These trends were used in a workshop and interview discussion between leading experts in the higher education field. Stage two focused on identifying 108 initiatives that represent these trends. Finally, 30 of these were selected as cases for further exploration in stage three. Using thematic analysis, the 30 cases were condensed into 12 main themes that represent the innovative practices that led to the development of the IDEAS framework as a signpost on the roadmap of next-generation pedagogy for transforming higher education. IDEAS is presented in the discussion alongside examples and ways to apply it in higher education contexts. The IDEAS framework proposes a set of key strategic points regarding challenges and trends in the field and highlights the most urgent aspects that need to be addressed. The pandemic demonstrated that many higher education institutions remain strategically unprepared to provide quality education in times of crisis, as seen when educators were forced to move their classes online; IDEAS provides a response that can help in a priori strategic and organisational planning as a robust method to prepare for the near future of higher education. In summary, this article provides an overview of the current changes, trends, and challenges at the centre of higher education's transformation, highlighting pedagogical innovation supported by technology as a core aspect. The IDEAS framework is intended to be indicative rather than comprehensive and descriptive rather than prescriptive. It is proposed as a guide to identify crucial issues and to support decision making, organisational planning, and structural design, thus developing strategies for institutions to remain at the cutting edge of transformational higher education while addressing challenges and concerns. It can be used to spark reflective thinking, brainstorming, debate, and imaginative planning for future policies at the institutional and cross-institutional levels. Finally, it can be applied to further research in the various associated themes being investigated by providing focal points to develop and explore hypotheses.
Introduction
In a progressively networked society, educators are faced with countless possibilities for strategic and opportunistic expansion (Henderikx & Jansen, 2018). While benefiting our society with increased access to education (Baldwin & Ching, 2019) and innovative teaching methods (Walder, 2017), this highly interconnected world also presents many challenges, given the societal expectations put on institutions (Posselt et al., 2018). The government, students, and society expect universities to be innovative, affordable, and cost-effective to remain relevant and provide quality education (Damewood, 2016), highlighting an ongoing transformation that signals an "increased convergence of many concerns: pedagogy, professional training, [and] the transfer of knowledge" in higher education institutions (Ruano-Borbalan, 2019, p. 493). Managing the transformation presents challenges for educators and education administrators as new pedagogies and technologies continue to materialize, driving the need for effective strategic planning and decision-making processes that guide their implementation (Bennett et al., 2018).
Advances in technology drive the emergence of innovative pedagogies and practices that in turn generate a "digital disruption of education" (De Wit et al., 2015, p. 77), acting as a catalyst for the main developments in higher education (Haywood et al., 2015). These effects are found both at institutions delivering distance and online education and at traditional face-to-face-only universities that are moving towards greater use of technology and interactive methodologies, providing a combination of the classroom experience with the convenience and flexibility of online provision and increasing student interaction and engagement (Phoong et al., 2019). Thus, technology supports traditional models of higher education as a transformative complementary tool (Goh et al., 2020).
However, what are the most effective pedagogical innovations implemented in digital learning environments? To respond, it is necessary to understand the core trends and challenges in higher education that could transform decision making about the future of education (DeVries, 2019) and identify practices that can be adopted in urgent and unprecedented situations, such as the pandemic, allowing universities to continue providing high-quality education (Bates, 2020). The COVID-19 pandemic highlights that disruptive pedagogical practices are implemented to respond immediately (Rapanta et al., 2020), but reports from early studies provide a mixed review of the effectiveness of emergency remote education (Bozkurt et al., 2020), which differs from the usual practices of distance and online education that benefit from extensive a priori strategic planning and organisation, thus impacting the quality of course design, development, and delivery (O'Keefe et al., 2020). Additionally, many teachers have little or no experience teaching in an online environment, and the rapid transition revealed a lack of expertise as an area in great need of further support going forward in the new normality (Johnson et al., 2020).
Trends in Higher Education
Trends provide a unique insight into the approaches that universities are taking to differentiate themselves in the fast-evolving educational environment, giving an overview of the state of the art of higher education (Westine et al., 2019). We operationalize trends as broad predominant directions in which higher education is developing and transforming.
Various reports identified in our study detail the current trends in higher education related to technology-enhanced teaching and learning, such as "The Changing Pedagogical Landscapes Study" (Henderikx & Jansen, 2018), which cites technology as a means to "solve problems higher education is facing today and … offer new opportunities for teaching and learning" (p. 3). The main trends reported include leadership and institutional strategy, gradual innovation at the course and curriculum levels, incentives for digital education, increased (scalable) continuous education and continuous professional development offerings, massive open online courses (MOOCs) as enablers for innovation, increasing internationalization of higher education, and the important role of governments. Moreover, institutions' capacity and resistance to implement technology were investigated, revealing that a lack of digital and media competences, absence of necessary institutional policies, and infrastructural limitations were the principal difficulties facing pedagogical innovation. The report suggests that blended learning methods are a trend driven by students' and teachers' digital skills, coupled with increased capability and reduced costs of the technology itself. Furthermore, the use of blended methods is recommended to complement, rather than replace, existing methods, as they improve quality while reaching a larger, more diverse population. Therefore, institutional policies and trends must adjust to the demand and be student-focused rather than teacher-focused forms of active learning.
Similarly, the Internationalization in Higher Education for Society publication (Brandenburg et al., 2020) addresses the crucial role that digital learning plays as a catalyst for the internationalization and mobility of both instructors and students. Its study references collaborative online international learning via technology-enabled virtual mobility as a key trend. Technological transformation is a vital factor in bridging the gap between universities and society, making the institutions more accessible to the wider public, including vulnerable communities, and it can extend education within the local society and beyond to national and international levels. Internationalization in higher education should focus on economic developmental models while also taking into account factors such as economic growth, technology transfer, and innovation (Brandenburg et al., 2020), reinforcing the importance of internationalization of digital learning, which in itself is considered a strategic issue in higher education development (De Wit et al., 2015).
The 2020 EDUCAUSE Horizon Report (Brown et al., 2020) focuses on five categories of trends: social, technological, economic, higher education, and political. Technological trends include advancements in artificial intelligence (AI), next-generation digital learning, and analytics and privacy questions. The authors discuss the economic impact of the trends, stating that institutions "will need to adjust their courses, curricula, and degree programmes to meet learners' needs as well as the demands of new industries and an evolving workforce" (p. 10). Technological advances respond to students' needs as they increasingly seek nontraditional routes to education, underlining that "higher education institutions are moving to new models for online programmes, such as assessment (competency) and crediting (micro-credentials and digital badging)" methods (p. 11).
Finally, it is necessary to reiterate that the COVID-19 pandemic has revealed a major new trend: it has increased higher education's dependency on technology for teaching and learning, as emergency online courses have been implemented without the necessary time frame to prepare for the move (Hodges et al., 2020). Nonetheless, merely moving traditional-style classrooms online is not enough to deliver a consistent quality of education (Gasevic, 2020). The aforementioned trend reports argue that technology provides a viable solution for designing and supporting more flexible educational models, which are adaptable to educational, social, and economic needs as they arise. As such, the current debate on the future of the university system questions the foundations of the institution as it is compelled to adapt to a social context where technology plays a predominant role. This does not mean that existing teaching models should be replaced, but rather that universities should use technological advances to enhance traditional forms of pedagogy, expanding the pedagogical possibilities thanks to the affordances of technology (Wick & Lumpe, 2015).
Challenges
Despite the advantages associated with pedagogical innovations supported by technology, challenges exist. Technology has been implemented more slowly than expected (Marshall, 2018). Although universities strive to remain innovative, the use of technologies, societal digitalization, and societal and economic limitations highlight the difficulties they face (Posselt et al., 2018). These difficulties impact teachers, learners, and decision making in terms of structural and content design (Ponomarenko et al., 2019).
Barriers to technology implementation include how the role of the teacher changes once technology is introduced (Bates, 2019). Understanding this role may explain the differences in technology use between novice and experienced teachers: in a 10-year longitudinal study, novice teachers in Sweden were more likely than their experienced counterparts to implement new technologies in their educational practices (Englund et al., 2017). Similarly, teachers' attitudes towards and beliefs about technology implementation were the strongest influences on its implementation at Dutch universities (Farjon et al., 2019). This represents an ongoing challenge, as it is vital that all members of the institution adopt a pro-change attitude to drive innovative pedagogical practices at the level of strategy and organisational planning (Bates, 2019). However, these attitudes are not always easily adopted: 93% of interviewed teachers in Australian universities identified teacher resistance to technology implementation as a core barrier (Watty et al., 2016).
Moreover, choosing the most appropriate tools and learning activities for teachers' and learners' needs is a time-consuming process (Bates, 2019). The rapid pace at which technological innovations are introduced often eclipses teachers' capacity to gain competence prior to use (Sutton & DeSantis, 2017). Academic staff must acquire an advanced level of digital and technological competency (Gillett-Swan, 2017). Therefore, collaborative learning via e-learning platforms and social networks, as well as online virtual collaboration between teachers, is needed (Romero-Moreno, 2019). Nevertheless, teachers differ in their opinions as to how technology can and should be used. Jääskelä et al. (2017) identify the following groups of teachers' beliefs about digital learning: "[It is] a pivotal tool for self-paced learning, an additional tool for active and interactive learning, a tool designed for the integration and assessment of learning, and a tool for changing the learning culture" (p. 202). Moreover, the growing need for teachers' digital competence (DC) in higher education requires ongoing support in digital teaching methods (Amhag et al., 2019).
Further challenges upon implementation of the technology are seen in credentialization, as the use of digital badge programmes involves usability issues, increased faculty workload, and a lack of information about their introduction (Stefaniak & Carey, 2019). Both learning analytics and AI face challenges due to the lack of theoretical background and of evidence-based business models promoting their use (Renz & Hilbig, 2020). These limitations are linked to the challenges institutions face in relation to technological infrastructure, hardware, and software (Ponomarenko et al., 2019), as teachers require information to implement organisational and strategic developments with innovative pedagogies.
Finally, from the learner perspective, the digital divide remains in terms of access to technology (De Wit et al., 2015). At the international and internal levels, the digital divide is a cause for concern in Europe, even in those countries considered to be digital leaders (Cruz-Jesus et al., 2016). Consequently, equal emphasis on digital skills and on the development of digital platform infrastructure is required to help institutions meet their students' needs in relation to the digital divide (Chetty et al., 2018).
Purpose of the Present Study
Based on a prior study by Guàrdia and Maina (2018), our research explores technology use as a driver of innovative change in higher education and its associated trends, challenges, and pedagogical practices in the face of the new, uncertain pandemic scenario. We aim to identify broad themes in innovative practices and the institutional initiatives that exemplify sound educational practices. The results are organised into a structured framework that will offer insight to universities that want to modernize and benefit from technologies, and that will aid policy makers and university directors in their decision-making strategies in relation to innovation in higher education. Finally, we hope to spark debate about the future of universities given that, in these unprecedented times, the future is now.
Research Design
Our study unfolds in three consecutive stages, following the thematic synthesis guidelines of Thomas and Harden (2008). Adopting this approach facilitates the coding of findings as well as the selection of descriptive themes, which in turn supports the development of the presented framework by providing a clearer understanding of the identified themes and initiatives (Vryonides et al., 2014). An overview of the research stages of this article is provided in Figure 1.
Overview of Research Stages
Stage one involved desk research of the academic and grey literature related to broad themes in innovative practices and original approaches (see Table 1) to teaching in higher education from 2015 to 2020. Prior to the search, the key themes were identified based on relevant academic reports, documented practices, and the researchers' professional expertise in innovative practices using information and communications technology. Projects financed in competitive calls were used as a reference to indicate the trends prioritized in education. We began our search using the term innovative practices, chosen for its specificity to the objective of this research. Using the Boolean operator AND, the innovative practices search term was linked to search terms related to higher education and technology, including higher education OR technology OR original approaches OR educational practices. Backward snowballing was used to expand the number of sources found within the reference lists of the reviewed literature to identify additional papers (Wohlin, 2014). The citation chaining option in Google Scholar was then used to further the literature search as it offers suitable coverage for systematic reviews (Gehanno et al., 2013) and is scholarly in terms of accuracy, authority, objectivity, currency, inclusion, and relevance (Howland et al., 2008).
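The resulting query structure can be written out explicitly; a sketch (Python) of the Boolean string described above, with quoting conventions that vary by database:

    # Stage-one search string: "innovative practices" linked by AND to the
    # related terms, which are combined with OR.
    primary = '"innovative practices"'
    linked = ['"higher education"', 'technology',
              '"original approaches"', '"educational practices"']
    query = f"{primary} AND ({' OR '.join(linked)})"
    print(query)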
A focus group of leading experts from the field of online education and technology (n = 35) was then held using the Future Scenarios workshop approach (Brown & Boorman, 2009). Working in small groups, participants brainstormed different scenarios for the future of universities, which contributed to the ongoing discussion about future possibilities, opportunities, and challenges in the sector. Finally, nine experts, chosen for their expertise and research in online education and learning technologies in different geographic areas of the world, were interviewed individually based on the results of the focus group, using the blue ocean approach to strategic planning (Kim & Mauborgne, 2014), which helps organisations discover their own unique selling point to differentiate themselves from similar competitors, thus offering more innovative and sustainable products. The interviews focused on the kind of educational model that could help institutions create their own blue ocean for learning and teaching, both now and in the future, in terms of pedagogical innovations that do not imitate competitors; these models were included in the experts' lists of recommendations, together with examples of the suggested innovations.
In stage two, examples of the trends and themes from the first phase were explored. A total of 108 initiatives in online, blended, and lifelong learning from institutions in Europe, the United States, Canada, and Australia were identified. The consulted reports, articles, and various sources discovered in the stage one literature search helped to identify, prioritize, and categorize many of the initiatives chosen. Coding was applied to these initiatives by two external researchers. The first researcher coded all 108 initiatives, and the second reviewed the 30 most relevant initiatives named by the first coder.
One point was allocated to any initiative representing an example of a current educational practice (max. points: 14); a challenge, concern, or area of interest for higher education (max. points: 12); or an innovative or original approach to teaching in higher education (max. points: 34). Initiatives that could not be transferred to an online environment were excluded; those remaining were captured in a spreadsheet mapping the initiatives to their descriptions, country or region of impact, main source of information, notes, and rating against the coding criteria (see Table 2).
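A minimal sketch (Python) of this scoring scheme as we read it: one point per matched criterion, summed across the three categories. The initiative names and point counts below are hypothetical.

    # Category maxima from the coding framework described above.
    CAPS = {"current_practice": 14, "challenge": 12, "innovative_approach": 34}

    initiatives = {
        "Initiative A": {"current_practice": 3, "challenge": 2, "innovative_approach": 10},
        "Initiative B": {"current_practice": 1, "challenge": 0, "innovative_approach": 4},
    }

    def score(points):
        # One point per matched item, never exceeding the category maximum.
        return sum(min(v, CAPS[k]) for k, v in points.items())

    ranked = sorted(initiatives, key=lambda name: score(initiatives[name]), reverse=True)
    print([(name, score(initiatives[name])) for name in ranked])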
In stage three, 30 of the 108 initiatives with the highest ratings in the coding framework were chosen as cases for further in-depth exploration based on their transferability, key aspects, probability of bringing about significant change in higher education, and the innovations and trends identified in the literature. The main objective was to propose significant attributes of next-generation pedagogy to inform institutional strategic planning for the future (see Tables 3 and 4).
The resulting trends and themes were reorganised into a comprehensive set of attributes of next-generation pedagogy using the data-reduction rationale of the technology, organisation, and pedagogy model (Sangrà et al., 2009), which matches technology, organisation, and pedagogy in e-learning.
Results and Discussion
A total of 14 trends (Table 1) emerged from stage one and were classified according to the three broad themes of interest: online learning and teaching, blended learning and teaching, and lifelong learning. In the stage two workshop, participants devised a list of 34 innovative approaches to teaching in higher education (Table 2), which influenced and informed the final selection of cases for further exploration and oriented the content of the expert interviews. Examples from the Table 2 list include artificial intelligence; communities of inquiry, interest, and practice; the Internet of things; learning and performance support systems; personal learning environments; project- and problem-based learning; recognition of open nonformal learning; smart learning environments; social networking for education; structure as opposed to flexibility; and virtual worlds.
A thematic analysis, a qualitative method credited as being accessible and theoretically flexible (Braun & Clarke, 2006), was applied to the 30 cases selected for further exploration. Twelve overarching themes in innovative practices and original approaches to teaching emerged from the selected institutional initiatives (Table 3). Based on the interviews, the investigation of innovative approaches, and the thematic analysis, the data set was reduced to reveal the IDEAS framework, developed as a signpost on the roadmap of next-generation pedagogy, alongside the landmark practices for each characteristic (Table 4). The acronym comprises the five key characteristics of innovative, next-generation pedagogy: Intelligent, Distributed, Engaging, Agile, and Situated.
Intelligent pedagogy involves the use of technology such as learning analytics to enhance the learning experience. Learning analytics helps to identify students who are off-track with their studies, update them with live progress reports, identify popular learning materials and methods to adapt coursework to individual learners' needs, and replace the learning management system with student data management, human resources management, and/or financial management. Teaching DC is another landmark practice: considering DC as a student learning outcome, providing DC training and development for staff, and establishing an institutional DC culture. Taking learning and teaching beyond the institutional learning platform is also vital and can be achieved by encouraging students to be curators/creators of online platforms relevant to the course content; by creating and/or participating in virtual collaborative project-work platforms for students and staff, so they can work with professionals and community members outside the institution; and by ensuring that the software architecture incorporates a range of educational applications (tools, systems, content). These practices encourage active learning by increasing student autonomy and the creative use of emerging technologies, such as remote labs or augmented and virtual reality, that enhance learners' educational experiences.
Additionally, mobile-device apps that support learning via student input and collaboration could be implemented.
Distributed pedagogy is related to the shared ownership of aspects of the learner's journey by various stakeholders in the process. It includes collaborative alliances between institutions and a deliberate disaggregation of services to let learners choose their learning experience from a competitive marketplace, demonstrating that a university education no longer depends on institutions to provide learning materials, teaching, and accreditation, as they have more freedom in the services they provide.
Thus, increasing focus on strategic partnerships through collaboration, building curricula and credentials with employers/employer bodies, tailoring programmes to enhance students' employability and support innovation in the industry, and partnering with agencies that can provide specific services for more flexibility such as 24-hour academic support for students is needed. Further elements of this practice include open access to courses and course materials alongside assessment and formal credits for successful demonstration of learning outcomes, institutional collaboration to recognize credits obtained via open/nonformal learning, offers of challenge exams and recognition of prior learning, and learners' empowerment to receive formal credit for learning from a variety of formal and informal sources. Finally, involving a wider community of interest in research and teaching activities such as projects that students can work on with professionals and interested members of the public to address problems of wider interest to society is recommended.
Engaging pedagogy encompasses students' desire to be engaged by what they are learning. Examples of effective practices for this characteristic include strategic design for active learning (which implies learners taking a more active role in content generation), active use of technology for learning, learner-built portfolios, appropriate use of gamification, and encouraging learners to proactively seek and use feedback from teachers, peers, and the wider academic community. Both a reduced focus on content and an increased focus on learning are also needed: reducing the course content and replacing it with learner-focused content (that is, learners find and evaluate information and apply it to real-life contexts) and with approaches to learning that include problem solving and project work in teams is recommended. It is crucial to support teaching staff in creating engaging pedagogy by encouraging them to find, select, reflect on, and participate in learning activities that match their levels of expertise; offering teaching enhancement programmes that fit easily into their workload; and ensuring recognition for continuing professional development (e.g., digital badges/micro-credentials).
Agile pedagogy addresses the need for flexibility and responsiveness to learners' needs. Facilitating personalisation and flexibility can be achieved by modularizing degree programmes as stand-alone blocks to be studied at home or at partner institutions, providing different entry points to degree programmes, eliminating preset deadlines and maintaining fixed schedules for assessment of learning, providing self-assessment tools so students can decide if a flexible programme suits them, proposing optimal course plans (learning pathway) with grade requirements and milestones, offering a variety of personalised assistance services such as online tutoring to support students, and tailoring communications and rapid responses to individual students' and teachers' needs, as well as tailoring access to learning resources, activities, and support to users.
Expanding the options for recognition of prior learning involves issuing micro-credentials (for example, nanodegrees, digital badges, or skill certificates endorsed by employers) based on successful completion of assessment; showcasing digital badges/credits on students' online profiles; integrating e-portfolios into students' personal learning environments; and awarding academic credits for evidence of prior learning. Moreover, the following actions are also recommended: widening participation by recruiting lifelong learners as opposed to traditional undergraduates; offering students money-saving and time-saving options, such as a subscription-based fee whereby students pay less based on the time taken to complete the programme, or free online courses; offering resources as transition points/credits towards formal degree programmes; and encouraging sponsorship to support free access to personalised support, academic credits, and certificates of achievement in online employment-focused MOOCs.
Another aspect of agile pedagogy refers to promoting internationalization and student mobility through partnerships with other universities, inviting students to experience different pedagogies and perspectives by taking a course or a number of free credits to use on MOOCs or distance courses offered by other institutions.
Situated pedagogy refers to the real-world relevance of the curriculum and the contextualisation of the learning process in terms of learners' personal/professional goals. To contextualise learning activities, educators should ensure that teaching and assessment reflect authentic contexts, giving learners the opportunity to apply the knowledge they have learnt and partner with companies, community organisations, government institutions, and nongovernmental organisations to identify key job-related competencies and integrate career development opportunities into the curriculum. They can then create an online platform to facilitate the coordination, development, and documentation of real-world projects. Further practices include expanding work-related learning opportunities via virtual mobility and placements; providing internships and research projects for industry clients; integration of assessments that simulate on-the-job work into programmes, emphasising feedback over grades; incentivising student participation in business projects by paying for successful solutions; offering online access to job vacancies, employer lectures, international opportunities, networking events, career profiles, and CV building resources; enabling students to demonstrate their knowledge and capabilities to prospective employers via a video platform; encouraging alumni to share work-related experiences with current students; and providing mentoring and/or internships and embedding innovation and entrepreneurship knowledge and skills in the course content.
Finally, big issues in society should be addressed. Practices aimed at achieving this goal include the following: student-led entrepreneurial activities or research projects using input from the public/community partners on custom-built platforms; collaborations with nonprofit organisations that widen participation in higher education-for example, programmes targeted at the refugee community; and engagement in local and regional initiatives for environmental protection and sustainability.
Conclusions
Higher education is being pushed to undergo rapid change and transformation. All higher education institutions, educational leaders, and administrators are expected to remain up to date with technological trends and societal demands, while continuing to provide high-quality education. The aim of this study was to detect the key themes, concerns, and examples of pedagogical innovative practices that drive transformation in higher education. A review of the extant literature alongside experts' opinions and thematic analysis revealed the most crucial areas for discussion in terms of technology, organisation, and pedagogy, as captured by the IDEAS framework. Focusing on the core framed characteristics here identified could encourage innovation in curriculum design and permit institutions to demonstrate their strengths and unique pedagogical approaches that differentiate them in the context of globalized education.
Our research focused on pedagogical innovation supported by technology as a catalyst for next-generation pedagogy, supported by studies that highlight the key role of technology in both development and change in higher education (Goh et al., 2020; Haywood et al., 2015; Westine et al., 2019). A prime example is learning analytics: research shows that they influence student outcomes, are useful in organisational terms for student assessment (Marshall, 2018), and can improve practices in the learning process (Viberg et al., 2018).
Another relevant outcome from our research is the emphasis on the need to ensure DC for both staff and students. It is essential that staff are digitally competent, not only to implement and adopt innovative pedagogical practices but also to promote pro-change attitudes. Research has suggested that negative attitudes to change in teaching methods can limit advances in their implementation (Bates, 2019; Englund et al., 2017; Watty et al., 2016). Therefore, the need to include DC as a core objective in institutional and organisational planning is emphasised, as it can complement the implementation of the necessary pedagogical and technological changes detected in our study. For students, DC is linked to the increased focus on active learning and on students as autonomous actors in choosing and defining their learning trajectory. A major factor behind this is that students no longer depend solely on traditional learning resources to continue their personal and professional development (Henderikx & Jansen, 2018; Wick & Lumpe, 2015).
Furthermore, our study creates a space for debate and reflection with regard to the COVID-19 pandemic.
The IDEAS framework proposes a set of key strategic points regarding challenges and trends in the field and highlights the most urgent aspects that need to be addressed. The pandemic has demonstrated that many higher education institutions remain strategically unprepared to provide quality education in times of crisis, as seen in the difficulties reported globally when educators were forced to move their classes online (Gasevic, 2020). IDEAS provides a response that can help in a priori strategic and organisational planning as a robust method to prepare for the near future of higher education.
In summary, this article provides an overview of the current changes, trends, and challenges at the centre of higher education's transformation, highlighting pedagogical innovation supported by technology as a core aspect. The IDEAS framework is intended to be indicative rather than comprehensive and descriptive rather than prescriptive. It is proposed as a guide to identify crucial issues and to support decision making, organisational planning, and structural design, thus developing strategies for institutions to remain at the cutting edge of transformational higher education while addressing challenges and concerns. It can be used to spark reflective thinking, brainstorming, debate, and imaginative planning for future policies at the institutional and cross-institutional levels. Finally, it can be applied to further research in the various associated themes being investigated by providing focal points to develop and explore hypotheses.
"Education",
"Computer Science"
] |
Magnetic Functionalization and Catalytic Behavior of Magnetic Nanoparticles during Laser Photochemical Graphitization of Polyimide
We report laser-assisted photochemical graphitization of polyimide (PI) into functional magnetic nanocomposites by laser irradiation of PI in the presence of magnetite nanoparticles (MNP). PI Kapton sheets covered with MNP were photochemically treated under ambient conditions using a picosecond pulsed laser (1064 nm) to obtain an electrically conductive material. Scanning electron microscopy of the treated material revealed a layered magnetic nanoparticle/graphite nanocomposite structure (MNP/graphite). Four-probe conductivity measurements indicated that the nanocomposite has an electrical conductivity of 1550 S/m. Superconducting quantum interference device (SQUID) magnetometry revealed an anisotropic ferromagnetic response in the MNP/graphite nanocomposite, compared to the isotropic response of the MNP. Raman spectroscopy of the MNP/graphite nanocomposite revealed a four-fold improvement in graphitization, suppressed disorder, and decreased nitrogenous impurities compared to the graphitic material obtained by laser treatment of plain PI sheets. X-ray photoelectron spectroscopy, X-ray diffraction, and energy-dispersive X-ray spectroscopy were used to delineate the phase transformations of the MNP during the formation of the MNP/graphite nanocomposite. Post-mortem characterization indicates a possible photocatalytic effect of the MNP during MNP/graphite nanocomposite formation. Under laser irradiation, the MNP transformed from the initial Fe3O4 phase to γ-Fe2O3 and Fe5C2 phases and acted as nucleation spots that catalyzed the graphitization of PI.
Introduction
Graphite and graphitic materials possess low shear resistance (softness), excellent thermal and electrical conductivity, high stiffness and strength, thermal stability at extreme temperatures (>3600 °C), and selective chemical reactivity. These properties enable diverse applications of graphitic carbon,1 including electronics, mechanical components (lubricants, metallurgy, etc.), energy storage (batteries), and energy generation (nuclear).2 Natural graphite is the most common resource utilized in industry. However, the increasing demand for and deficit of high-quality natural graphite, especially within the US,3 severely impact the supply chain. Artificial graphite formation is challenging because it requires extreme-temperature (>2000 °C) thermal decomposition of polymers under an inert atmosphere. This results in a process efficiency of 25% and an energy requirement of 4.5 kWh/kg,4 pushing the price up to 10 times that of natural graphite.
Laser-induced reduction of polymers into graphite offers a unique route toward room-temperature, energy-efficient production of artificial graphite.5 The concept of laser-induced transformation of diamond to graphite was established in the early 2000s.6 In 2014, Lin et al.7 reported the laser-induced transformation of polymeric films (polyimide, PI) into a 3D porous graphene structure (laser-induced graphene, LIG) using a microsecond CO2 laser. Initial studies of the LIG transformation were conducted on PI and polyetherimide (PEI), which contain nitrogen groups and aromatic chains.5,7,8 These polymers were selected for their existing hexagonal ring structure and the presence of nitrogen within the polymerizing group. The ring structure plays a crucial role in the sp2-hybridized stacking of the graphene layers, while the nitrogen creates a protective blanket that prevents incineration during the lasing process.
Nevertheless, the current LIG graphitization process with a microsecond-pulsed CO2 laser induces a disordered hexagonal structure dominated by heptagonal and pentagonal irregularities,5 primarily due to a photothermal-dominant conversion mechanism.
The structure and dynamics of a surface treated by pulsed-laser irradiation strongly depend on the laser pulse timescale.6,9,10 Upon incidence, the laser pulse transfers its energy to the electron system. Typically, the electron−lattice relaxation time is on the order of picoseconds (10⁻¹² s).11 For nanosecond and longer pulse-width lasers, the electrons transfer their kinetic energy to the lattice, bringing the system to thermal equilibrium (a photothermal regime). With ultrashort-pulsed (pico- and femtosecond) lasers, the laser energy is transferred to the electrons in a time shorter than the electron−lattice relaxation period; the energy thus remains within the electron gas (a photochemical regime) and is not dissipated into the lattice. The electron system stays hot and in internal equilibrium while the lattice remains cold. Whereas a nanosecond or microsecond laser can generate on the order of >2000 K in the lattice, a pico-/femtosecond laser can generate >10,000 K of equivalent temperature in the electron system.6
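The photothermal/photochemical distinction is often illustrated with a two-temperature model. The toy integration below (Python) uses order-of-magnitude placeholder parameters of ours, not values fitted to polyimide, and only shows how a 12 ps pulse drives the electron gas far above the lattice temperature:

    # Two-temperature model: Ce*dTe/dt = -G*(Te - Tl) + S(t);  Cl*dTl/dt = G*(Te - Tl).
    Ce, Cl = 2.0e4, 2.0e6     # electron / lattice heat capacities, J m^-3 K^-1
    G = 1.0e17                # electron-phonon coupling constant, W m^-3 K^-1
    tau, S0 = 12e-12, 1.0e21  # pulse width (12 ps) and absorbed power density, W m^-3

    dt, steps = 1e-14, 1200   # 10 fs Euler step over the 12 ps pulse
    Te = Tl = 300.0
    for n in range(steps):
        S = S0 if n * dt < tau else 0.0
        dTe = (-G * (Te - Tl) + S) / Ce
        dTl = G * (Te - Tl) / Cl
        Te += dTe * dt
        Tl += dTl * dt
    print(f"end of pulse: Te ~ {Te:.0f} K, Tl ~ {Tl:.0f} K")  # Te - Tl ~ S0/G ~ 10,000 K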
LIG has been functionalized with nanomaterials to enable applications in energy storage and sensing,5,8 micro-supercapacitors,12,13 electrocatalysts,7,14,15 and other sensors.16,17 However, to our knowledge, the synthesis of magnetically functionalized graphitic materials by laser-assisted transformation of polymers has not been reported. In this work, we report the photochemical synthesis of a magnetic nanoparticle/graphite nanocomposite using an ultrashort-pulse picosecond laser and the photocatalytic behavior of magnetite (Fe3O4) nanoparticles in the conversion process (Fig. 1). The laser-induced graphitization and the magnetic behavior of the composite are analyzed. Post-mortem characterization of the morphology, phase evolution, and graphitization is performed to elucidate the possible conversion pathways during laser processing.
Synthesis of magnetite nanoparticles:
The magnetite nanoparticles were synthesized by hydrothermal co-precipitation of iron(II, III) salts with NaOH. For the synthesis, 1.75 g (6.3 mmol) of ferrous sulfate heptahydrate (FeSO4·7H2O, Fisher) and 2.92 g (10.8 mmol) of ferric chloride hexahydrate (FeCl3·6H2O, Fisher) were added to 50 mL of deionized water. The mixture was heated to 80 °C in an oil bath under an argon atmosphere and constant stirring. The pH of the mixture was first adjusted to 11 using NaOH, and 0.02 g/mL of citric acid was then added until the pH reached 4.0. The solution was stirred at 80 °C for 90 minutes under Ar. Subsequently, the magnetite was precipitated using 1 M NaOH, followed by three alternating washing steps with water and acetone. Finally, the collected residue was dried in a vacuum oven at 80 °C and 0.5 atm under an Ar blanket for three hours until dry.
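As a quick check of our own, the weighed masses reproduce the quoted millimole amounts and give an Fe3+/Fe2+ ratio slightly below the ideal 1:2 of stoichiometric Fe3O4 (a small Fe2+ excess is sometimes used to offset aerial oxidation):

    # Molar masses of the hydrated salts, g/mol (rounded).
    M_FeSO4_7H2O = 278.01
    M_FeCl3_6H2O = 270.30

    n_Fe2 = 1.75 / M_FeSO4_7H2O   # ~6.3 mmol Fe(II), as quoted
    n_Fe3 = 2.92 / M_FeCl3_6H2O   # ~10.8 mmol Fe(III), as quoted
    print(f"Fe3+/Fe2+ = {n_Fe3 / n_Fe2:.2f} (ideal for Fe3O4: 2.00)")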
Synthesis of graphitized film and graphene/magnetite nanocomposite: Kapton polyimide film of 250 µm thickness was laser-irradiated in air with a 12 ps pulsed Nd:YAG solid-state laser (1064 nm, 10 kHz pulse repetition rate; Fianium Inc.) to obtain the graphitized film. The laser was operated at 2.8 W with a spot diameter of 60 µm and a scan speed of 8 mm/s. The graphene/magnetite nanocomposite synthesis was accomplished in three steps. The dried magnetite nanoparticles were crushed into a powder with a mortar and pestle and sonicated in methanol for 15 minutes. Three suspensions of magnetite were made with 200 mg of powder in 3 mL (high), 6 mL (medium), and 12 mL (low concentration) of methanol. In the first step, 0.5 mL of the high-concentration suspension was applied to the Kapton, and the nanoparticle-covered surface was irradiated at a laser power of 0.8 W and a scan speed of 8 mm/s. In the second step, the laser-treated surfaces were coated with 0.5 mL of the medium-concentration nanoparticle suspension and irradiated with the laser.
In the final step, a similar volume of the low-concentration nanoparticle suspension was applied to the surface, and the particle-covered surface was again irradiated with the laser at the same processing parameters.
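The stated parameters imply the per-pulse dose and the pulse overlap; a sketch (Python) that assumes a flat-top 60 µm spot (a Gaussian profile would roughly double the peak fluence):

    import math

    f_rep = 10e3        # pulse repetition rate, Hz
    spot_d = 60e-6      # spot diameter, m
    v_scan = 8e-3       # scan speed, m/s
    area_cm2 = math.pi * (spot_d / 2) ** 2 * 1e4

    for P in (0.8, 2.8):  # W: nanocomposite steps vs. plain-PI graphitization
        E_pulse = P / f_rep
        print(f"P = {P} W: {E_pulse * 1e6:.0f} uJ/pulse, "
              f"fluence ~ {E_pulse / area_cm2:.1f} J/cm^2")

    step = v_scan / f_rep  # stage advance between pulses, m
    print(f"{step * 1e6:.1f} um between pulses -> ~{spot_d / step:.0f} pulses per spot")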
Materials Characterization:
The magnetite nanoparticles suspended in methanol were characterized for particle size using a Malvern Zetasizer Nano ZS dynamic light scattering (DLS) system. The phase evolution of the magnetite and the functionalized nanocomposite was characterized using a Bruker D8 X-ray diffractometer (XRD). The photochemical transformations of the PI into the graphene nanocomposite during laser processing were analyzed using a Kratos Amicus X-ray photoelectron spectrometer (XPS). The degree of graphitization was inspected with a Raman spectrometer (Spectra-Physics Excelsior 532-50-CDRH). The morphology and elemental composition of the nanocomposite were imaged with an FEI Teneo field-emission scanning electron microscope (SEM) with energy-dispersive X-ray spectroscopy (EDS). The magnetic properties of the nanoparticles and the nanocomposite were measured using a Quantum Design SQUID magnetometer at 300 K with a maximum applied field of 7 T. A four-point probe instrument (Jandel Inc.) was used to characterize the surface conductance of the nanocomposite film.
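For context, a collinear four-point-probe reading converts to bulk conductivity via the sheet resistance. In the sketch below (Python), the π/ln 2 infinite-sheet correction, the 160 µm conductive-layer thickness (taken from the SEM cross-section reported later), and the probe reading itself are our assumptions, chosen to reproduce the reported 1550 S/m:

    import math

    k = math.pi / math.log(2)    # ~4.532, infinite-thin-sheet geometric factor
    V_over_I = 0.89              # hypothetical probe reading, ohm
    t = 160e-6                   # assumed conductive layer thickness, m

    R_sheet = k * V_over_I       # sheet resistance, ohm per square
    sigma = 1.0 / (R_sheet * t)  # bulk conductivity, S/m
    print(f"Rs ~ {R_sheet:.2f} ohm/sq, sigma ~ {sigma:.0f} S/m")  # ~1550 S/m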
Results and Discussion:
The synthesized MNP had particle-size distributions of 81 ± 40 nm and 138 ± 80 nm and a cumulative distribution of 110 ± 65 nm. The size distribution and the multi-peak fit (OriginLab 10) using the Voigt function to delineate the two particle-distribution batches of the synthesized nanoparticles are shown in Fig. S1a and Table S1. The M−H loop measurements of the synthesized particles, shown in Fig. S1b, indicated a maximum magnetization of 80 emu/g at 7 T.
The MNP sample had a remanence of 1.56 emu/g and coercivity of 52 Oe, indicating a dominant ferromagnetic behavior and Fe3O4 phase in the synthesized material.
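For readers who wish to reproduce this kind of bimodal size-distribution analysis, the following is a minimal Python sketch assuming a hypothetical DLS histogram; the paper itself used OriginLab 10, so scipy serves only as a stand-in here.

```python
# Minimal sketch of a two-population Voigt fit to a DLS size distribution.
# All data below are synthetic; only the two mode positions (81 nm, 138 nm)
# come from the text.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def two_voigt(x, a1, c1, s1, g1, a2, c2, s2, g2):
    """Sum of two Voigt peaks: amplitude * V(x - center; sigma, gamma)."""
    return (a1 * voigt_profile(x - c1, s1, g1)
            + a2 * voigt_profile(x - c2, s2, g2))

# Hypothetical DLS data: size bins (nm) and normalized intensities.
rng = np.random.default_rng(0)
size_nm = np.linspace(10, 400, 120)
counts = (0.6 * voigt_profile(size_nm - 81, 25, 15)
          + 0.4 * voigt_profile(size_nm - 138, 45, 30)
          + rng.normal(0, 1e-4, size_nm.size))

# Initial guesses near the two reported modes (81 nm and 138 nm).
p0 = [0.5, 80, 20, 10, 0.5, 140, 40, 20]
popt, _ = curve_fit(two_voigt, size_nm, counts, p0=p0, maxfev=20000)
print("Fitted peak centers (nm):", popt[1], popt[5])
```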
SEM images of the laser-treated Kapton film are shown in Fig. 2a–c. The picosecond laser used for graphitization has a beam diameter of 60 µm. The central ~20 µm region of maximum laser intensity ablated the polyimide, creating grooves (Fig. 2a, b) on the top surface. The cross-sectional image of the transformed material (included in Fig. S2) shows that it has an average thickness of 160 µm and a closed-foam porous structure with a pore wall thickness of 1 µm. This primarily resulted from the rapid electronic excitation and molecular dissociation of the PI chains into layered graphite, in contrast to the open-foam, mesh-like porous graphene network reported in the literature for microsecond CO2 laser irradiation 7. The closed-foam structure indicates that the polymer did not undergo excessive expansion during photolysis, thus maintaining a dense microstructure. A high-magnification micrograph of a single layer of the closed foam (Fig. 2c) reveals folded multi-layered graphene sub-structures. The EDS analysis of the treated surface was performed to investigate the elemental composition (Fig. 2d–f and Table S2). The analysis indicated a carbon-dominant structure with no surface nitrogen observed. A small fraction of elemental oxygen was detected and could be attributed to undissociated C–O bonds within the PI starting material and to atmospheric oxidation during the laser ablation process. However, the relatively small oxygen presence within the sample confirms a successful photochemical dissociation of the PI material. The ultrashort irradiation from the picosecond laser processing minimizes the scope of surface oxidation resulting from thermal excitation of the lattice. In addition, the nitrogen released from the dissociation of the PI structure may protect the polymer from oxidation during the laser treatment process. An insignificantly small phosphorus content was detected, which could be from sample or SEM chamber contamination. SEM images of the synthesized MNP-graphite sample are presented in Fig. 3a, b, and the cross-section is shown in Fig. S3a. The SEM images reveal a closed-foam porous structure with thinner cell walls compared to the laser-treated Kapton film shown in Figure 2. The graphite layers were interlaced with a dispersed network of magnetite nanoparticles and particle agglomerates, as shown in Fig. 3b, c, and S3b. The cell walls of the composite structure were thinner than those of the laser-assisted graphite, with a mean thickness of 400 ± 20 nm. A closer inspection of the graphite folds (Fig. 3b) revealed that the MNP agglomerate more at the edge planes than at the basal plane.
Plausibly, the energy from the laser impact near the center of the beam (ablated area) concentrated the particles over the edge planes. The particle size measured from the SEM images varied from 100 ± 30 nm for the smallest individual spherical nanoparticles to 900 ± 150 nm for the particle agglomerates. The EDS map of a cell wall with particle deposition (Fig. 3c) indicates oxygen (primarily) and iron concentrated at the particle spots over a carbon background. The nitrogen signal was mainly background, with no quantitative indication. The map indicates iron oxide particles embedded in a carbonaceous surface. The strong oxygen signal from the iron oxide particles could be interpreted as surface oxidation of the magnetite nanoparticles to a ferric oxide (Fe2O3) composition. A point-EDS analysis of a lower-magnification SEM image was performed to observe compositional variations in the composite material (Fig. 3d, e, and Table 1). The primary elemental signals were from C, N, O, and Fe, with certain impurities from Na and Cl.
In spectrum 5, an almost equal Fe:O ratio was observed, with ~25 at.% Fe and 43 at.% C. The low O-to-Fe concentration ratio indicates the elimination of oxygen during laser processing and, plausibly, iron carbide phase formation. Spectrum 6 (shown in Table 1), in contrast, primarily contained carbonaceous material (88 at.%) with an O:Fe ratio of about 2:1. The excess O relative to Fe indicated oxidation of the magnetite to ferric oxide, with the residual oxygen in the carbon material (similar to Fig. 3c). The absence of an N signal in spectra 5 and 6 indicated a complete dissociation of the PI film. Spectrum 7 (shown in Table 1) primarily contained C, N, and O signals, suggesting a possibly unconverted PI impurity in the treated sample. The Na and Cl impurities in spectrum 5 are attributed to the MNP synthesis process, where Na+ (from NaOH) neutralizes with Cl− (from FeCl3) to form NaCl.
The magnetic characterization of the MNP-graphite sample (Fig. 3f) revealed ferromagnetic behavior with a maximum magnetization of 17.9 emu/g at 7 T, a remanent magnetization of 0.465 emu/g, and a coercivity of 67.8 Oe. Comparing the saturation magnetization with that of the as-synthesized MNP (Fig. S1b), the MNP loading fraction in the graphite was calculated to be 20.2 wt.%. The small anisotropy observed between the x, y, and z magnetization directions could be due to process-induced anisotropy from the laser treatment or a systematic effect from the initial PI film.

The surface resistances of the electrically conductive films obtained from laser treatment of the PI and magnetite/PI nanocomposite films were measured to be 10.0 ± 0.5 and 4.0 ± 0.5 Ω/□, respectively. These values are lower than the reported surface resistance range of 17–28 Ω/□ for LIG from CO2 lasing 5. The average thickness of the converted material, computed from cross-sectional images of the two films (Figs. S2 and S3), yielded electrical conductivities of 620 ± 10 S/m and 1800 ± 50 S/m for the laser-treated PI and magnetite/PI films, respectively. Both the conductivity measurements and the SEM analysis indicate an efficient transformation of PI into an electrically conductive closed-foam material via a photochemical dissociation mechanism. Incorporating the magnetite nanoparticles into the laser-treated material results in a nanocomposite film with approximately three times higher electrical conductivity.

The Raman spectrum of the laser-treated PI film, labeled "laser-assisted graphite" in Fig. 4a, reveals graphitization of the PI film with an IG/ID ratio of 0.79 (<1). The large disorder (D) band indicates incomplete graphitization of the treated sample. A peak between 1700 cm−1 and 2000 cm−1 indicates incomplete conversion, with signals from C=O-containing impurities in the bulk of the treated sample 19. Turbostratic disorder and misalignment of the graphitic layers can be inferred from the significant D band at 1340 cm−1 and the broad 2D band at 2680 cm−1. The Raman spectrum of the laser-treated MNP/PI film, labeled "MNP-graphite" in Fig. 4a, shows an almost sixfold improvement in the degree of graphitization compared to the laser-assisted graphite, with an IG/ID ratio of 4.6. The O-impurity and disorder peaks are almost completely suppressed. This significant improvement upon adding MNP to the PI during laser processing indicates a possible catalytic effect of the magnetite nanoparticles.

XRD characterization of the initial MNP powder (supplementary Fig. S5) and of the post-laser-treatment MNP-graphite film (Fig. 4b) was performed to understand the phase evolution during the laser treatment. The as-synthesized MNP powder is composed of magnetite (Fe3O4, Pearson 1816552 20) and maghemite (γ-Fe2O3, Pearson 452633 21) phases; analysis of the peaks revealed 95 wt.% magnetite and 5 wt.% maghemite (Table 2). This compositional analysis supports the magnetic characterization of the powders (Fig. S1b), wherein the high magnetization was predominantly from the magnetite phase in the MNP powder. The XRD spectrum of the magnetite-graphite composite revealed peaks from several phases, including graphite, maghemite, and iron(II, III) carbide. The primary graphite peaks were at 30.72° and 52.14° (Pearson 1817309 22). Although the magnetite and maghemite peaks are close together in XRD, the lattice parameter obtained by refining the fitted curve for the cubic crystal was estimated to be 8.34 Å (maghemite) rather than 8.39 Å (magnetite). The primary maghemite peaks fitted to the spectrum were at 35.28°, 41.64°, 50.76°, 67.66°, and 74.62°. In addition, several Fe5C2 (Pearson 1617122 23) peaks located at 39.42°, 41.82°, 43.32°, 46.06°, 47.80°, 48.20°, 50.08°, 50.94°, 51.78°, 52.38°, 52.84°, 55.5°, 59.06°, 60.64°, and 68.86° were observed. Refinement and analysis of the MNP-graphite spectrum suggested 21.5 wt.% maghemite (γ-Fe2O3), 4.0 wt.% iron carbide (Fe5C2), and 74.5 wt.% graphite with residual polyimide (Table 2). The composition calculated from the XRD spectrum matches the magnetic loading fraction estimated from the MH characterization (Fig. 3f) to within 6%.
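The cubic lattice-parameter estimate that separates maghemite (≈8.34 Å) from magnetite (≈8.39 Å) follows directly from Bragg's law. The short sketch below illustrates the computation; the Co Kα wavelength and the (hkl) assignments are our assumptions (they are consistent with the listed maghemite peak positions but are not stated in the text).

```python
# Sketch: cubic lattice parameter from XRD peak positions via Bragg's law.
# d = lambda / (2 sin(theta)); for a cubic crystal, a = d * sqrt(h^2+k^2+l^2).
import numpy as np

WAVELENGTH_A = 1.789  # Co K-alpha, an inference from the listed peak angles

def cubic_lattice_parameter(two_theta_deg, hkl):
    theta = np.radians(two_theta_deg / 2.0)
    d = WAVELENGTH_A / (2.0 * np.sin(theta))
    return d * np.sqrt(sum(i * i for i in hkl))

# Peaks taken from the text, with assumed spinel (hkl) indexing.
peaks = [(35.28, (2, 2, 0)), (41.64, (3, 1, 1)), (50.76, (4, 0, 0))]
for two_theta, hkl in peaks:
    a = cubic_lattice_parameter(two_theta, hkl)
    print(f"2theta = {two_theta:.2f} deg, hkl = {hkl}: a = {a:.3f} A")
# All three yield a ~ 8.35 A, close to the maghemite value reported above.
```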
XPS analysis of the MNP-graphite composite surface revealed elemental compositions containing primarily C, O, N, and Fe (spectrum and fits shown in Fig. S4; analysis results in Table S3). Deconvolution of the wide spectrum revealed a carbonaceous surface (37.7 at.%) with iron (18.55 at.%) and oxygen (42.28 at.%) species. The Fe-to-O atomic ratio suggests an Fe(III), i.e., maghemite, composition with excess surface oxygen from impurities in the decomposed carbon material. Narrow spectra of the MNP-graphite film surface corresponding to the C, O, Fe, and N elements and their corresponding deconvolutions are shown in Figs. 5a–d, respectively.
Therefore, the XPS analysis suggested that the treated MNP-graphite composite surface is composed of a graphitic structure with maghemite and iron carbide as the magnetic phases and carbon–oxygen species as unconverted impurities. The morphological and chemical characterization of the laser-treated PI and MNP-graphite film surfaces indicates that laser graphitization of polyimides under ultrashort pulsed laser irradiation follows a photolytic transformation mechanism. Initially, the incident laser pulse interacts with and couples to the electron system, transferring the electromagnetic energy as kinetic (thermal) and potential (band gap) energy. Typically, the electron–lattice relaxation time is on the order of picoseconds (10−12 s) 11. In picosecond lasers, the pulse width over which the beam energy is transferred to the electrons is shorter than the electron–lattice relaxation period. The potential energy in the band gap does not get the opportunity to convert into kinetic energy (via the Auger process) and dump into the lattice via electron–ion collisions to bring the system to thermal equilibrium.
Instead, the energy remains within the electron gas and is not dissipated into the lattice. Since the electron gas relaxation time is ~10−14 s 18 (shorter than the laser pulse width), the electron system (at >10,000 K) equilibrates internally but remains at a significantly higher temperature than the lattice (at 300 K) 6. The polyimide chain, consisting of benzene rings with associated imide functional groups (Fig. 1a), is an excellent starting material due to its ring structure and the presence of nitrogen-containing groups. The ring structures from the benzene (hexagon) and imide (pentagon) units re-organize into an sp2-hybridized graphitic structure during photodecomposition. The N within the imide group dissociates into free radicals within the plasma, either absorbing the evolving oxygen radicals to form NOx or forming molecular N2 and protecting the treated region from further oxidation. The strong electronic excitation of picosecond pulsed lasers allows rapid molecular dissociation without distorting the lattice structure and enables the layered graphitic material that forms the walls of the closed-foam structure observed in the SEM micrographs.
The chemical phase evolution characterized using XPS, EDS, and XRD suggests that the magnetite nanoparticles convert to maghemite and iron carbide, acting as nucleation sites that couple the laser irradiation onto the polyimide surface and catalyze the photolysis mechanism. The MNP accelerate the elimination of oxygen from the C structure during conversion. Naturally, the maghemite is formed through vacancies created by the ejection of Fe2+ from the magnetite crystal lattice, commonly represented as (Fe3+)[Fe3+5/3□1/3]O4, where □ denotes the vacant Fe(II) octahedral sites. During the laser conversion process, the Fe from the magnetite reacts with the carbon to form the iron carbide phase (Fe5C2), thus reducing the magnetite to maghemite. This process binds the carbon to the nanoparticles while eliminating the N and O species into the plasma plume. Laser irradiation of the PI surface in the presence of MNP results in a higher degree of graphitization and more electronically conductive films than laser irradiation of PI alone. These results suggest that the nanoparticles play a significant role as a photocatalyst, accelerating the photolytic decomposition process while becoming incorporated into the graphitized material to form the magnetite/graphite nanocomposite.
Conclusion
We report the synthesis of a magnetite-functionalized graphite nanocomposite via the photochemical decomposition of polyimide film in the presence of magnetite nanoparticles using a picosecond pulsed laser. Ultrashort pulsed lasers induce photochemical transitions in polymers, resulting in a graphitic film with excellent electronic conductivity and a closed-foam morphology enclosed within planar surfaces. Irradiation of a magnetite-nanoparticle-covered PI surface resulted in an MNP-graphite nanocomposite with higher electronic conductivity and a closed-foam morphology in which magnetite nanoparticles and particle agglomerates are dispersed in the cell walls. Magnetic measurements of the nanocomposite indicated ferromagnetic behavior with process-induced magnetic anisotropy. Raman spectra of the MNP-graphite composite indicated a four times higher degree of graphitization than the laser-converted graphite. Additionally, a fourfold increase in electrical conductivity was measured due to the magnetite nanoparticle inclusion in the MNP/graphite nanocomposite. The characterization results suggest that the magnetite nanoparticles suppress the N impurities and turbostratic disorder of the graphitic structures formed during laser irradiation, suggesting a possible photocatalytic mechanism. EDS, XRD, and XPS analyses suggested that the magnetite nanoparticles acted as nucleation sites to distribute the laser energy for rapid photochemical conversion of the PI film. The magnetite bonded with the carbon chains by forming iron carbide (Fe5C2) and eliminated O from the polymer structure by reducing to maghemite (γ-Fe2O3). Therefore, the magnetite nanoparticles are not only incorporated into the graphite to form a magnetic nanocomposite with higher economic potential but also catalyze the photochemical dissociation process, enabling energy-efficient laser graphitization.
Figure 1 .
Figure 1. Schematic and conversion mechanism of laser-assisted graphitization on polyimide with magnetite nanoparticles.
Figure 2 .
Figure 2. Laser-assisted graphitization of polyimide, a–c) SEM images of the treated graphite surface at 250X, 1000X, and 10000X, respectively, and d–f) EDS point analysis of the treated surface.
Figure 3 .
Figure 3. Laser-assisted graphitization of polyimide film functionalized with magnetite nanoparticles, a–b) SEM images of the MNP-graphite nanocomposite at 1000X and 10000X, c–e) EDS analysis of the treated surface, with EDS map (c) and point scans (d, e), and f) magnetic characterization (MH) at 300 K and 7 T.
Figure 4 .
Figure 4. Phase evolution of magnetite during laser photochemical decomposition of polyimide functionalized with MNP, a) Raman spectra of graphite without and with MNP functionalization, and b) XRD of the MNP-graphite composite.
Table 2 .
Phase and compositional analysis of XRD for magnetite nanoparticles and the MNP-graphite composite.
"Materials Science",
"Physics",
"Chemistry"
] |
A Fusion-Based Approach for Breast Ultrasound Image Classification Using Multiple-ROI Texture and Morphological Analyses
Ultrasound imaging is commonly used for breast cancer diagnosis, but accurate interpretation of breast ultrasound (BUS) images is often challenging and operator-dependent. Computer-aided diagnosis (CAD) systems can be employed to provide the radiologists with a second opinion to improve the diagnosis accuracy. In this study, a new CAD system is developed to enable accurate BUS image classification. In particular, an improved texture analysis is introduced, in which the tumor is divided into a set of nonoverlapping regions of interest (ROIs). Each ROI is analyzed using gray-level cooccurrence matrix features and a support vector machine classifier to estimate its tumor class indicator. The tumor class indicators of all ROIs are combined using a voting mechanism to estimate the tumor class. In addition, morphological analysis is employed to classify the tumor. A probabilistic approach is used to fuse the classification results of the multiple-ROI texture analysis and morphological analysis. The proposed approach is applied to classify 110 BUS images that include 64 benign and 46 malignant tumors. The accuracy, specificity, and sensitivity obtained using the proposed approach are 98.2%, 98.4%, and 97.8%, respectively. These results demonstrate that the proposed approach can effectively be used to differentiate benign and malignant tumors.
Introduction
Breast cancer is the most common cancer in women worldwide and one of the major causes of death in females across the globe [1]. The statistics of the World Health Organization (WHO) indicate that, in 2012, 1.67 million new cases were diagnosed with breast cancer and around 522,000 women died of this disease [1]. Early diagnosis of breast cancer is crucial for the successful treatment of the disease and improving the survival rates of the patients [2].
Ultrasound imaging is one of the most widely used imaging modalities for breast cancer diagnosis since it offers the advantages of low cost, portability, patient comfort, and diagnosis accuracy [3,4]. However, the interpretation of breast ultrasound (BUS) images is operator-dependent and varies based on the experience and skill of the radiologist [5].
To overcome this limitation, computer-aided diagnosis (CAD) systems have been introduced to analyze BUS images and provide the radiologist with a second opinion to improve the diagnosis accuracy and reduce the effect of operator dependency [5,6].
Many studies, such as [7-15], have employed BUS image analysis for classifying breast tumors. In particular, morphological features [13,16,17] and texture features [8,12] have been demonstrated to be useful for differentiating benign and malignant tumors. Moreover, combining both feature groups has been suggested to improve the tumor classification accuracy [13,18]. Morphological features quantify the geometrical characteristics of the tumor, such as area, shape, orientation, regularity, and margins [6,19]. Therefore, morphological features are mainly affected by the accuracy of the tumor outline. Commonly used morphological feature descriptors include the aspect ratio [13,17], the best-fit ellipse of the tumor, the normalized radial length (NRL) [18,20], and the undulation characteristics [21]. Texture features quantify the pixel gray-level statistics in terms of intensity and spatial distribution [6]. Generally, the texture patterns of benign tumors are different from those of malignant tumors [10]. Therefore, several texture descriptors have been employed for classifying BUS images [22-26]. Among these descriptors, the gray-level cooccurrence matrix (GLCM) [27] is one of the most widely used texture analysis techniques for BUS image classification [12]. Conventional texture analysis often uses a single region of interest (ROI) to extract global texture features that quantify the texture characteristics of the entire tumor. One of the most common ROI selection procedures is to find the minimum bounding rectangle that encloses the tumor [9,12,22]. Another ROI selection approach is to find the maximum rectangle that fits inside the tumor [28]. Such ROIs can be drawn manually by a radiologist or detected automatically using a computer algorithm.
In many BUS images, the local texture patterns within the tumor vary from one region to another. Hence, the use of a single ROI, which enables the extraction of global texture features that quantify the entire tumor, might not support effective quantification of the local texture variations within the tumor. Moreover, the mismatch between the predefined structure of the ROI and the actual shape of the tumor might reduce the tumor classification accuracy. For example, consider the benign and malignant tumors shown in Figures 1(a) and 1(b), respectively. The texture patterns inside each tumor demonstrate local variations. For both tumors, the ROIs corresponding to the minimum bounding rectangle that encloses the tumor are presented in Figures 1(c) and 1(d). Both ROIs might not provide efficient extraction of texture features that can effectively quantify the local texture variations within the tumor. In addition, the ROI of each tumor extends beyond the tumor boundary, and hence the texture features extracted from such an ROI are expected to quantify both the tumor and the surrounding healthy tissue. These limitations might lead to imprecise texture analysis of the tumor, which in turn can reduce the tumor classification accuracy.
To improve the tumor classification capability of ultrasound texture analysis, this study investigates the use of multiple ROIs to analyze the local pixel gray-level statistics inside the tumor. In particular, the tumor is divided into a set of nonoverlapping ROIs as illustrated in Figures 1(e) and 1(f). Each ROI is analyzed individually to extract local texture features. The texture features employed in this study are computed using the GLCM matrix. A local tumor class indicator is estimated for each individual ROI by classifying the texture features of that ROI using a well-trained classifier. The class of the tumor can then be determined, based on the multiple-ROI texture analysis, by employing a majority voting mechanism to integrate the local tumor class indicators of all ROIs inside the tumor. The proposed multiple-ROI texture analysis approach enables effective quantification of the local texture patterns inside the tumor without incorporating texture patterns of the healthy tissue that surrounds the tumor.
One challenge of applying the proposed multiple-ROI texture analysis approach is to enable an effective combination of the local texture features, which are extracted for each one of the multiple ROIs inside the tumor, with the morphological features that are computed for the entire tumor. Therefore, a novel probabilistic approach is proposed to fuse the tumor classification indicators obtained using the multiple-ROI texture analysis with the tumor classification indicator computed using morphological analysis of the entire tumor. The morphological analysis employed in this paper is based on a set of morphological features introduced in previous studies [13,17,18,20,21,29] to quantify the shape and contour of the tumor.
To evaluate the performance of the proposed BUS image classification approach, both the multiple-ROI texture analysis and the fusion-based combination of the multiple-ROI texture analysis and morphological analysis are employed to classify a BUS image database that includes 64 benign tumors and 46 malignant tumors. These BUS images were acquired during ultrasound breast cancer screening procedures. The tumor classification results of the proposed approach are compared with conventional texture (single ROI), morphological, and combined texture and morphological analyses.
The remainder of the paper is organized as follows. The data acquisition of the BUS image database is summarized in Section 2. Moreover, Section 2 describes the conventional texture and morphological analysis of BUS images, the proposed tumor classification approach, and the performance metrics employed to compare the conventional and proposed BUS image classification approaches. The experimental results and discussion are provided in Section 3. Finally, the conclusion is presented in Section 4.
Data Acquisition.
The collected image database consists of 110 BUS images of pathologically proven benign and malignant tumors (64 benign tumors and 46 malignant tumors). A detailed description of the types of benign and malignant tumors involved in this study is provided in Table 1. Each BUS image was acquired from one patient (i.e., the number of patients who participated in the study is 110). All participating patients were female. Moreover, each image included exactly one breast tumor. The age of the patients ranged from 25 to 77 years. The mean and standard deviation of the maximum diameters of the tumors are 14.7 mm and 6.0 mm, respectively. The BUS images were acquired during routine ultrasound breast cancer screening procedures at the Jordan University Hospital, Amman, Jordan, during the period between May 2012 and February 2016. Ultrasound imaging was performed using an Acuson S2000 ultrasound system (Siemens AG, Munich, Germany) and a 14L5 linear transducer with a frequency bandwidth from 5 to 14 MHz. During imaging, the radiologist was free to adjust the configurations of the imaging system, including the focal length, depth, and gain, to obtain the best view. For each BUS image, the tumor was manually outlined by a radiologist with more than 13 years of experience. The tumor outlines were also verified by another independent experienced radiologist. All images were resampled to the same resolution of 0.1 mm × 0.1 mm per pixel. The study protocol was approved by the ethics committee at the Jordan University Hospital. Moreover, informed consent to the protocol was obtained from each patient.
Quantitative Features.
Both texture and morphological features are used to classify the benign and malignant breast tumors. The following two sections describe both feature groups.
Texture Features.
The texture features employed in this study were computed using the GLCM matrix [27], which measures the correlations between adjacent pixels within an ROI. The computation of the GLCM matrix was performed using four distances ($d$ = 1, 2, 3, and 4 pixels) and four different orientations ($\theta$ = 0°, 45°, 90°, and 135°). Therefore, sixteen GLCM matrices were computed for each ROI. Each GLCM matrix was analyzed, as described in [12], to extract twenty texture features (TF1-TF20) that are commonly used for ultrasound texture analysis [12,32]. These texture features are provided in Table 2. Thus, a total of 320 texture features were extracted from each ROI.
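As a rough illustration of this step, the following Python sketch computes GLCM-based features for a single ROI using scikit-image; note that graycoprops exposes only a handful of the twenty features (TF1-TF20), so it is a simplified stand-in for the full feature set.

```python
# Sketch: GLCM feature extraction for one ROI (16 matrices: 4 distances x 4 angles).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi):
    glcm = graycomatrix(
        roi,
        distances=[1, 2, 3, 4],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    # Each property yields a 4x4 (distance x angle) array -> flatten and stack.
    props = ["contrast", "correlation", "energy", "homogeneity", "dissimilarity"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical 1 x 1 mm^2 ROI (10 x 10 pixels at 0.1 mm/pixel), 8-bit grayscale.
roi = np.random.randint(0, 256, (10, 10), dtype=np.uint8)
print(glcm_features(roi).shape)  # (80,) here; the paper extracts 320 features
```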
Six morphological features (MF11-MF16) are extracted from the best-fit ellipse that approximates the size and position of the tumor: the length of the ellipse major axis (MF11) [20], the length of the ellipse minor axis (MF12) [20], the ratio between the ellipse major and minor axes (MF13) [20], the ratio of the ellipse perimeter to the tumor perimeter (MF14) [20], the overlap between the ellipse and the tumor (MF15) [20], and the angle of the ellipse major axis (MF16) [20]. Two morphological features (MF17-MF18) are extracted from the normalized radial length (NRL) of the tumor: the NRL entropy (MF17) [18,20] and the NRL variance (MF18) [18,20]. The NRL is defined as the distance between the tumor center and the pixels located on the tumor boundary, normalized to the maximum radial length of the tumor [18]. The eighteen morphological features are summarized in Table 2.

Conventional Tumor Classification.
A feature selection procedure is employed to identify subsets of texture, morphological, and combined texture and morphological features that reduce the misclassification error between malignant and benign tumors. In fact, an exhaustive search for the optimal feature combination requires extensive computational resources and long processing times, particularly when the number of features is large. For example, the total number of potential groupings of $n$ features into $k$ subsets is given by $\frac{1}{k!}\sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j}j^{n}$ [33]. Therefore, a two-phase heuristic approach, based on the feature selection procedures described in [12,34], is employed to carry out feature selection. In the first phase, the features are ranked according to the minimal-redundancy-maximal-relevance (mRMR) criterion [34], which is based on mutual information. The top $l$-ranked features are incrementally grouped and their classification performance is evaluated, for all $l = 1, 2, \ldots, n$, where $n$ is the total number of features. The smallest feature group that achieves the minimum classification error is taken as the candidate feature subset. In the second phase, the backward selection algorithm is applied to the candidate feature subset: features are sequentially eliminated until the removal of further features degrades the classification accuracy. This two-phase algorithm enables the selection of a compact feature subset that can achieve effective tumor classification.

The selected features are classified using a binary SVM classifier [35] implemented with the LIBSVM library [36]. In a binary SVM, the input features are mapped into a high-dimensional feature space by applying a kernel function. This mapping enables the computation of a nonlinear decision function that separates the feature space into two regions, one for each class. Specifically, given a training set $T = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_k, y_k), \ldots, (\mathbf{x}_K, y_K)\}$, where $\mathbf{x}_k$ represents the $k$th feature vector and $y_k \in \{-1, +1\}$ is the corresponding tumor class, the goal of the SVM is to determine a decision boundary, in the form of a hyperplane, that separates the feature space into two regions by maximizing the margin between the samples of the different classes. The resultant decision function is defined as follows:

$$f(\mathbf{x}) = \operatorname{sign}\left(\sum_{k=1}^{K} \alpha_k y_k K(\mathbf{x}_k, \mathbf{x}) + b\right),$$

where $\mathbf{x}$ is a new feature vector to be classified into benign or malignant, $K(\mathbf{x}_k, \mathbf{x})$ is a kernel function that maps the input vectors into the high-dimensional space, $\alpha_k$ is the $k$th Lagrange multiplier, and $b$ is the bias term of the decision hyperplane. Several kernel functions can be used with SVM; however, the Gaussian radial basis function (RBF) is by far the most commonly used kernel for classification tasks [37], and it is employed in this work. The RBF kernel function is defined as

$$K(\mathbf{x}_k, \mathbf{x}) = \exp\left(-\gamma \|\mathbf{x}_k - \mathbf{x}\|^2\right),$$

where $\gamma > 0$ is the RBF kernel parameter. The performance of the SVM classifier with the RBF kernel depends on two parameters: $\gamma$, the RBF kernel parameter, and $C > 0$, the regularization parameter. The tuning of the two parameters is carried out using a grid-based search of the two-dimensional parameter space $1 < \gamma < 100$ and $1 < C < 100$, performed with a step length of 1. The best SVM model is selected such that its parameters maximize the average tumor classification accuracy.
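A minimal scikit-learn sketch of this RBF-SVM tuning is given below (the paper used LIBSVM directly); the feature matrix, labels, and the coarsened grid step are illustrative assumptions.

```python
# Sketch: RBF-SVM with grid-tuned (gamma, C). The paper's grid steps over
# 1 < gamma < 100 and 1 < C < 100 with step length 1; coarsened here for brevity.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.randn(110, 18)             # hypothetical feature vectors
y = np.random.choice([-1, 1], size=110)  # hypothetical tumor classes

# probability=True enables Platt scaling, used later to obtain posteriors.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
param_grid = {
    "svc__C": np.arange(1, 100, 10),      # coarse stand-in for step length 1
    "svc__gamma": np.arange(1, 100, 10),
}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_)
```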
The performance evaluation of the conventional tumor classification is performed using the single-ROI GLCM texture features, the morphological features, and the combined single-ROI texture and morphological features. Similar to the work of Wu et al. [13], the evaluation is carried out using a fivefold cross-validation procedure, in which 80% of the tumors are selected for training and the remaining 20% are used for testing. This process is repeated five times so that each of the 110 BUS images is included exactly once in the testing.
The Proposed Tumor Classification Approach.
The architecture of the proposed tumor classification approach is illustrated in Figure 3. In this architecture, the multiple-ROI texture analysis is carried out by dividing the tumor into small, nonoverlapping ROIs and extracting local texture features from each individual ROI. Moreover, the tumor is analyzed to extract morphological features. To combine the local texture features of the individual ROIs and the global morphological features, two independent posterior tumor class likelihoods are obtained separately from the multiple-ROI texture analysis and the morphological analysis. Finally, decision fusion is applied to fuse both tumor class likelihoods and determine the class of the tumor.
To perform the multiple-ROI texture analysis, the tumor is divided into a set of uniform, nonoverlapping ROIs, as shown in Figures 1(e) and 1(f). The size of the ROIs is chosen by considering three factors: preserving the capability of differentiating various texture patterns, reducing the possibility of including different local textures within the same ROI, and ensuring that the entire tumor is adequately covered by the ROIs. The study by Valckx and Thijssen [38] suggested that the use of very small ROIs might degrade the capability of differentiating various texture patterns. On the other hand, the use of large ROIs increases the possibility of including different local texture patterns within a single ROI. Moreover, the use of large ROIs might leave big gaps, that is, areas that are not covered by the ROIs, at the tumor boundary. For example, consider Figures 4(a), 4(c), and 4(e), which show the benign tumor in Figure 1(a) divided into uniform ROIs of size 0.5 × 0.5 mm², 1 × 1 mm², and 2 × 2 mm², respectively, and Figures 4(b), 4(d), and 4(f), which show the malignant tumor in Figure 1(b) divided into ROIs of the same three sizes. The use of the 0.5 × 0.5 mm² ROIs minimizes the possibility of including different local textures within a single ROI and reduces the gaps at the tumor boundary. However, the small size of the ROIs, which corresponds to 5 × 5 pixels, might limit the ability of the texture analysis to differentiate various texture patterns. On the other hand, the use of the 2 × 2 mm² ROIs, which correspond to 20 × 20 pixels, enables better texture classification but increases the possibilities of including different local textures within the same ROI and producing large gaps at the tumor boundary. The 1 × 1 mm² ROIs, which correspond to 10 × 10 pixels, provide a reasonable balance between the need for ROIs large enough to enable effective texture analysis and the requirements of reducing the possibility of crossing different local textures within a single ROI and achieving adequate coverage of the entire tumor. Hence, the size of the ROIs employed in this study is set to 1 × 1 mm². Each ROI is processed individually to extract the GLCM texture features described in Section 2.2.1. The two-phase feature selection algorithm described in Section 2.3 is employed to determine the subset of texture features that enables the best tumor classification accuracy based on the multiple-ROI texture analysis. A binary SVM classifier with an RBF kernel is used to classify each ROI as benign or malignant using the selected subset of texture features. The tuning of the SVM parameters is achieved using the grid-based search described in Section 2.3. The posterior tumor class likelihood of each ROI is estimated from the SVM output using Platt's approach [39]. Then, a majority voting mechanism is used to determine the class of the tumor based on the classification indicators of the individual ROIs: if more than 50% of the ROIs in the tumor are classified as malignant, the tumor is considered malignant; otherwise, the tumor is considered benign. The posterior likelihood of the tumor is computed by averaging the posterior tumor class likelihoods of the ROIs that agree with the tumor class estimated using the multiple-ROI texture analysis; a sketch of this step is given below.
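A compact sketch of this voting-and-averaging step might look as follows; the per-ROI posteriors are hypothetical values.

```python
# Sketch: majority voting over ROI-level SVM posteriors (Platt scaling), with
# the tumor-level posterior averaged over the ROIs that agree with the majority.
import numpy as np

def classify_tumor(roi_malignant_probs, threshold=0.5):
    """roi_malignant_probs: per-ROI P(malignant) from the SVM."""
    votes = roi_malignant_probs > threshold
    malignant = votes.mean() > 0.5             # >50% of ROIs vote malignant
    agreeing = roi_malignant_probs[votes == malignant]
    if malignant:
        posterior = agreeing.mean()            # mean P(malignant) of agreeing ROIs
    else:
        posterior = (1.0 - agreeing).mean()    # mean P(benign) of agreeing ROIs
    return ("malignant" if malignant else "benign"), posterior

probs = np.array([0.9, 0.8, 0.3, 0.7, 0.95, 0.6])  # hypothetical ROI posteriors
print(classify_tumor(probs))  # ('malignant', ~0.79)
```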
To perform the morphological analysis, the extraction and selection of the morphological features as well as the tuning of the SVM classifier match those of the conventional morphological-based classification that was described in Section 2.3. Moreover, the tuned SVM is used to classify the tumor based on the selected morphological features and Platt's approach is applied to compute the posterior tumor class likelihood of the entire tumor.
For a given BUS image, the posterior tumor class likelihood obtained using the multiple-ROI texture analysis is mutually independent from the posterior tumor class likelihood estimated using the morphological analysis. Therefore, the fusion of the tumor class decisions obtained using these two independent analyses can be performed using a Gaussian Naive-Bayes approach [40].
To apply the Gaussian Naive-Bayes approach, consider a vector of continuous decisions $\mathbf{D} = [d_1, \ldots, d_N]$ obtained from $N$ different classifiers for a specific BUS image. The probability that the BUS image belongs to class $c_i$ given the decisions of the $N$ different classifiers can be written as

$$P(c_i \mid d_1, \ldots, d_N) = \frac{P(c_i)\, P(d_1, \ldots, d_N \mid c_i)}{P(d_1, \ldots, d_N)}, \qquad (3)$$

where, for the binary classification considered in this study, $c_i \in \{-1, 1\}$ and $N = 2$. Using the mutual independence assumption between the two classifiers, (3) can be rewritten as

$$P(c_i \mid d_1, \ldots, d_N) = \frac{P(c_i) \prod_{n=1}^{N} P(d_n \mid c_i)}{P(d_1, \ldots, d_N)}. \qquad (4)$$

The term $P(d_1, \ldots, d_N)$ is a normalization factor. Therefore, a BUS image can be classified based on the combined decisions from the $N = 2$ classifiers using the following decision rule:

$$\hat{c} = \arg\max_{c_i}\, P(c_i) \prod_{n=1}^{N} P(d_n \mid c_i), \qquad (5)$$

where $P(\mathbf{D} \mid c_i)$ is modeled as a multivariate normal distribution with mean vector $\boldsymbol{\mu}_i$ and covariance matrix $\boldsymbol{\Sigma}_i \in \mathbb{R}^{N \times N}$, which is diagonal under the naive independence assumption. The class prior probability $P(c_i)$ and the parameters $(\boldsymbol{\mu}_i, \boldsymbol{\Sigma}_i)$ are estimated using maximum likelihood [41].
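For illustration, the following sketch fits such a fusion model with scikit-learn's GaussianNB, which estimates the class priors and per-decision Gaussian likelihoods by maximum likelihood (its covariance is diagonal, matching the naive independence assumption); all data below are synthetic.

```python
# Sketch: Gaussian Naive-Bayes fusion of two decision scores per image,
# d1 from the multiple-ROI texture analysis and d2 from the morphological analysis.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Hypothetical training decisions D = [d1, d2] with labels in {-1, +1}.
D_train = np.vstack([rng.normal(0.3, 0.15, (60, 2)),    # benign-like scores
                     rng.normal(0.8, 0.15, (50, 2))])   # malignant-like scores
y_train = np.array([-1] * 60 + [1] * 50)

fusion = GaussianNB().fit(D_train, y_train)
D_new = np.array([[0.72, 0.85]])   # posteriors from the two analyses, one image
print(fusion.predict(D_new), fusion.predict_proba(D_new))
```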
The performance evaluation of the proposed tumor classification approach is carried out using two different configurations. In the first configuration, the tumor is classified using the multiple-ROI texture analysis only. In the second configuration, tumor classification is carried out by fusing the posterior tumor class likelihoods of the multiple-ROI texture analysis and the morphological analysis. In both configurations, the fivefold cross-validation procedure described in Section 2.3 is employed. It is worth noting that the selection of the ROIs during the fivefold SVM training and testing of the multiple-ROI texture analysis was tumor-specific. In other words, in each fold of the cross-validation procedure, the training was performed using ROIs that belong to 80% of the tumors, while the testing was carried out with the ROIs of the remaining 20% of the tumors.
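One way to enforce this tumor-specific splitting in practice is scikit-learn's GroupKFold, sketched below with hypothetical arrays; the paper does not name a specific implementation.

```python
# Sketch: ROIs from the same tumor must never appear in both training and
# testing folds; GroupKFold enforces this when each ROI carries its tumor ID.
import numpy as np
from sklearn.model_selection import GroupKFold

X_rois = np.random.randn(1000, 320)           # hypothetical per-ROI features
y_rois = np.random.choice([-1, 1], 1000)      # label inherited from the tumor
tumor_ids = np.random.randint(0, 110, 1000)   # one group per tumor

for train_idx, test_idx in GroupKFold(n_splits=5).split(X_rois, y_rois, tumor_ids):
    # No tumor ID is shared between the training and testing folds.
    assert not set(tumor_ids[train_idx]) & set(tumor_ids[test_idx])
```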
The classification performance is evaluated using six objective metrics: accuracy = (TP + TN)/(TP + TN + FP + FN), specificity = TN/(TN + FP), sensitivity = TP/(TP + FN), positive predictive value (PPV) = TP/(TP + FP), negative predictive value (NPV) = TN/(TN + FN), and the Matthews correlation coefficient (MCC) = (TP × TN − FP × FN)/√((TP + FP)(TP + FN)(TN + FP)(TN + FN)), where TP is the number of true positive cases, TN is the number of true negative cases, FP is the number of false positive cases, and FN is the number of false negative cases. The relationships between specificity and sensitivity achieved using the conventional and proposed classification approaches are analyzed by drawing the receiver operating characteristic (ROC) curves. Moreover, the area under the ROC curve (AUC), which quantifies the overall performance of a CAD system, is computed for each classification approach.
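The six metrics can be computed directly from the confusion-matrix counts, as in the short sketch below; the example counts are the ones implied by the reported fused results (45 of 46 malignant and 63 of 64 benign tumors correctly classified).

```python
# Sketch: the six reported metrics from confusion-matrix counts.
import math

def metrics(TP, TN, FP, FN):
    acc = (TP + TN) / (TP + TN + FP + FN)
    spec = TN / (TN + FP)
    sens = TP / (TP + FN)
    ppv = TP / (TP + FP)
    npv = TN / (TN + FN)
    mcc = ((TP * TN - FP * FN)
           / math.sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)))
    return acc, spec, sens, ppv, npv, mcc

# Reproduces the fused-approach values: 0.982, 0.984, 0.978, 0.978, 0.984, 0.963.
print([round(m, 3) for m in metrics(TP=45, TN=63, FP=1, FN=1)])
```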
To confirm the effectiveness of the proposed fusion-based approach, paired t-tests were carried out on the average classification accuracies to compare the fused multiple-ROI texture and morphological analyses with the other four classification approaches.
The execution times of the conventional texture, morphological, and combined texture and morphological analyses are compared with those of the proposed multiple-ROI texture analysis and the fused multiple-ROI texture and morphological analyses. The comparison was performed by implementing the five approaches using MATLAB (MathWorks Inc., Natick, Massachusetts, USA) and executing them on a computer workstation that has a 3.5 GHz processor and 16 GB of memory and runs the Ubuntu Linux operating system. For each one of the five classification approaches, the total time required to extract the features and classify the BUS image was recorded for twenty trials.
Results and Discussion
The tuned values of the SVM parameters ($\gamma$, $C$) that are used to carry out tumor classification using the conventional texture features, morphological features, and combined texture and morphological features are (3, 56), (3, 50), and (2, 50), respectively. Moreover, the tuned values of ($\gamma$, $C$) employed to perform tumor classification using the proposed multiple-ROI texture analysis are (4, 55). To carry out the fusion-based tumor classification, both the multiple-ROI texture analysis and the morphological analysis are performed using their optimized SVM parameters (i.e., (4, 55) for the multiple-ROI texture analysis and (3, 50) for the morphological analysis).
The results achieved by the proposed tumor classification approach using the multiple-ROI texture analysis as well as the fused multiple-ROI texture and morphological analyses are shown in Table 3 with respect to the pathological findings. Both configurations of the proposed approach achieved effective classification of benign and malignant breast tumors. However, the fusion of the multiple-ROI texture analysis and morphological analysis enabled higher classification performance than that obtained using the multiple-ROI texture analysis alone.
The six objective performance metrics obtained for the proposed classification approach and the conventional classification approach are presented in Table 4.

Table 4: Objective performance metrics obtained using the (a) conventional classification approach using texture features, (b) conventional classification approach using morphological features, (c) conventional classification approach using both texture and morphological features, (d) proposed classification approach using multiple-ROI texture analysis, and (e) proposed classification approach using the fused multiple-ROI texture analysis and morphological analysis.

The conventional classification approach achieved better performance by combining the texture and morphological features than by using only the texture features or the morphological features. This finding agrees with the results reported in previous studies [13,14]. Moreover, the classification results demonstrate that the proposed approach using the multiple-ROI texture analysis outperforms the conventional classification using the texture, morphological, and combined texture and morphological features. In particular, the multiple-ROI texture analysis achieved a classification accuracy of 95.5%, specificity of 93.8%, sensitivity of 97.8%, PPV of 91.8%, NPV of 98.4%, and MCC of 90.9%. The optimal classification performance was achieved by the proposed approach using the fused multiple-ROI texture and morphological analyses. Specifically, the fusion of the multiple-ROI texture and morphological analyses enabled a classification accuracy of 98.2%, specificity of 98.4%, sensitivity of 97.8%, PPV of 97.8%, NPV of 98.4%, and MCC of 96.3%. The ROC curves of the conventional classification approach and the proposed classification approach are shown in Figures 5 and 6, respectively. The AUC values obtained for the conventional classification using the texture features, morphological features, and combined texture and morphological features are 0.902, 0.912, and 0.948, respectively. The proposed classification approach achieved AUC values of 0.963 using the multiple-ROI texture analysis and 0.975 using the fused multiple-ROI texture and morphological analyses. These results confirm the superior performance of the proposed classification approach compared to conventional BUS image classification.
The p values obtained using the paired t-tests to compare the proposed fused multiple-ROI texture and morphological analyses with the other four classification approaches at a confidence level of 0.05 are shown in Table 5. The results reported in Table 5 demonstrate that the fusion-based approach significantly outperforms the conventional classification using the texture features, morphological features, and combined texture and morphological features, as well as the multiple-ROI texture analysis.
According to these results, our proposed tumor classification approach achieved a high sensitivity of 97.8% using both the multiple-ROI texture analysis and the fused multiple-ROI texture and morphological analyses, which suggests that the misclassification of malignant tumors as benign tumors can be minimized. These results also suggest that the proposed approach has the potential to provide the radiologists with a second opinion that effectively reduces the rate of misdiagnosis.

Figure 6: The ROC curves of the proposed classification approach using the multiple-ROI texture analysis, the morphological analysis, and the fused multiple-ROI texture analysis and morphological analysis.

The mean ± standard deviation execution times of the multiple-ROI texture analysis and the fused multiple-ROI texture and morphological analyses are 72.20 ± 2.14 s and 73.66 ± 2.19 s, respectively. In comparison, the mean ± standard deviation execution times of the conventional texture, morphological, and combined texture and morphological analyses are 0.16 ± 0.03 s, 1.47 ± 0.18 s, and 1.63 ± 0.19 s, respectively. Although the multiple-ROI texture analysis and the fused multiple-ROI texture and morphological analyses are slower than the conventional classification analyses, both proposed classification approaches require only around one minute to classify a BUS image. Such execution times do not limit the application of the proposed classification approaches in CAD systems that aim to provide an accurate second opinion to the radiologist.
The results reported in this study indicate that the proposed multiple-ROI texture analysis outperforms the conventional texture analysis in which texture features are extracted from a single ROI that includes the tumor. As mentioned in the Introduction, many breast tumors might have complicated texture patterns that vary from one region to another inside the tumor. Therefore, the multiple-ROI texture analysis enables effective quantification of the different local texture patterns inside the tumor. Another factor that might contribute to the improved performance of the multiple-ROI texture analysis is its ability to analyze the local texture patterns of the tumor without incorporating texture patterns of the surrounding healthy tissue.
The use of small ROIs for tissue characterization has been employed by other ultrasound-based methods. For example, in quantitative ultrasound imaging of cancer [42,43], the raw ultrasound radio-frequency (RF) signals are divided into small ROIs, and each ROI is analyzed to extract spectral features for tissue characterization. Moreover, a recent study by Uniyal et al. [44] has compared the classification performance of a combination of ultrasound-based texture, spectral, and RF time series features that are extracted from the entire breast tumor with the performance obtained by dividing the tumor into 1 × 1 mm 2 ROIs and extracting similar ultrasound-based features from each individual ROI. This study demonstrates that the classification performance obtained by classifying the individual 1 × 1 mm 2 ROIs outperforms the classification results achieved by classifying the entire tumor. This finding agrees with our proposed multiple-ROI texture analysis approach.
The multiple-ROI texture analysis has been applied in the current study to improve the classification performance of GLCM texture features. Our future directions include extending the multiple-ROI texture analysis approach to incorporate other statistical texture methods that use a ROI to extract texture features. The proposed approach can also be extended by performing multiresolution texture feature extraction, in which ROIs of different sizes are employed to carry out the multiple-ROI texture analysis. Moreover, the probabilistic approach, which has been used in this study to fuse the multiple-ROI texture analysis with the morphological analysis, can be expanded to support the fusion of multiple classification results obtained using various texture and morphological methods with the goal of achieving higher accuracy, specificity, and sensitivity.
One important factor that affects the tumor classification performance is the ability to accurately outline the tumor. In particular, imprecise outlining of the tumor might influence the morphological features that quantify the shape and contour of the tumor. Moreover, the texture features, which are extracted from the outlined tumor region, might also be affected by tumor segmentation errors. In this study, tumor outlining was performed by a radiologist with more than thirteen years of experience. Such manual outlining by an experienced operator has been employed in several previous studies, such as [10,15]. In fact, the manual outlining of the tumor is a time-consuming task, and its accuracy is subject to the experience level of the radiologist. A future direction of this work is to adopt automatic tumor segmentation algorithms, such as [45], that employ advanced image processing techniques to achieve accurate and objective outlining of the tumors.
The multiple-ROI texture analysis approach employed in this study can be extended to reduce the effect of tumor outlining errors. In particular, for each ROI inside the computer-drawn outline, a well-trained classifier can be used to estimate the probability of belonging to the tumor or the surrounding healthy tissue. Such probability estimation can be used to weight the tumor class indicators obtained from the individual ROIs. A customized voting algorithm can be developed to combine the weighted tumor class indicators of the individual ROIs and estimate posterior tumor class likelihood.
Conclusion
In this study, an effective approach for BUS image classification is proposed. Texture analysis is carried out by dividing the tumor into a set of nonoverlapping ROIs and processing each ROI individually to estimate its tumor class indicator. The tumor class indicators of all ROIs inside the tumor are combined using a majority voting mechanism to estimate the posterior tumor class likelihood. In addition to the multiple-ROI texture analysis, morphological analysis is used to estimate the posterior tumor class likelihood. A probabilistic approach is employed to fuse the posterior tumor class likelihoods obtained using the texture and morphological analyses. The proposed approach has been employed to classify 110 BUS images. The classification results indicate that the proposed approach achieved classification performance that outperforms conventional texture and morphological analyses. In particular, fusing the multiple-ROI texture analysis and morphological analysis enabled a classification accuracy of 98.2%, specificity of 98.4%, and sensitivity of 97.8%. These results suggest that the proposed approach can effectively be used to differentiate benign and malignant tumors in BUS images.
"Computer Science"
] |
OMNI-CONV: Generalization of the Omnidirectional Distortion-Aware Convolutions
Omnidirectional images have drawn great research attention recently thanks to their great potential and performance in various computer vision tasks. However, processing such a type of image requires an adaptation to take into account spherical distortions. Therefore, it is not trivial to directly extend conventional convolutional neural networks to omnidirectional images, because CNNs were initially developed for perspective images. In this paper, we present a general method to adapt perspective convolutional networks to equirectangular images, forming a novel distortion-aware convolution. Our proposed solution can be regarded as a replacement for existing convolutional networks without requiring any additional training cost. To verify the generalization of our method, we conduct an analysis of three basic vision tasks, i.e., semantic segmentation, optical flow, and monocular depth. The experiments on both virtual and real outdoor scenarios show that our adapted spherical models consistently outperform their counterparts.
Introduction
Omnidirectional optical cameras can effectively capture their environment in a single shot thanks to their ultra-wide field of view (FoV). As a result, many robotic applications are interested in using such a type of image that can provide rich information about the scene, especially helpful for obstacle avoidance. Various recent works have shown the great potential of omnidirectional images, such as [1,2] for simultaneous visual localization and mapping (SLAM) and, more recently, ref. [3] on deep reinforcement learning (DRL). These solutions have shown better performances than their counterparts based on conventional images with a limited FoV.
Recent learning-based methods have greatly advanced the research on various vision and robotic tasks. This can be mainly attributed to the fast development of GPUs and, more importantly, to the large number of labeled datasets. Nevertheless, most existing datasets consist of perspective images, with few datasets collected by omnidirectional sensors. Indeed, building an accurate and complete dataset is labor intensive and time consuming. In addition, omnidirectional sensors capable of extracting the ground truth are rare, complex to calibrate, and often subject to reconstruction errors. There have been several recent attempts to build benchmark spherical datasets, such as Matterport3D [4] and Stanford-2D3D [5]. However, these works were built virtually and with indoor scenes. Even though we can train networks on these datasets, extending the application to real cases or outdoor scenes is not trivial. Hence, developing a novel method to adapt networks pretrained on perspective images is in high demand for omnidirectional applications.
As suggested in [6], all spherical projections come with distortions. In particular, equirectangular images, commonly used for their easy readability and classical rectangular format, suffer from significant distortions in the polar regions. Because of this non-linearity, objects appear differently at different latitudes. To tackle this issue, several approaches propose to take spherical distortions into account by modifying traditional image processing methods. Nevertheless, these works suffer from the following drawbacks:
• Learning-based methods. Several works [7-9] propose to train the network on omnidirectional datasets. However, as discussed in the previous paragraphs, the existing spherical datasets are limited to indoor scenes with few images compared to the perspective benchmarks.
• Adaptation-based methods. Several works add distortion awareness to the latent-space features by using a specific mathematical formulation, such as the fast Fourier transform [10,11] or polyhedra [12]. Despite the elegance of these solutions, the adapted network needs to be trained from scratch with specific training datasets. In addition, the adaptation methods are very demanding in terms of computational cost. Therefore, it is difficult to implement such methods on edge devices for real-time robotic applications.
To address the abovementioned dilemmas, in this paper we propose to directly replace standard convolution operations with distortion-aware convolutions. Therefore, we can benefit from all the developments on perspective images to boost the performance of various tasks on omnidirectional images. Technically, we modify the shape of each convolution kernel according to its latitude in the image. It is worth noting that adapted convolutions have demonstrated their effectiveness in perspective networks [8,13-18]. Different from previous works [15-17] that dynamically learn the new kernel shape, we propose a distortion-aware convolution with a statically computed receptive field. One major advantage is that our method does not require additional training and can be directly implemented in any existing convolutional network pretrained with perspective images. The effect of spherical adaptation on optical flow estimation was demonstrated in a previous publication [14]. Here, we extend this previous work using a state-of-the-art optical flow estimation network and generalize the demonstration to two other commonly used visual modalities: semantic segmentation and monocular depth.
We compare our adapted network with its baseline version on complex and unstructured outdoor datasets. We also present a new equirectangular photorealistic forest dataset with ground-truth semantic segmentation and depth. Finally, we test our solution on real outdoor images taken with an omnidirectional camera. In all cases, the adapted networks outperform their non-spherical counterparts.
The structure of this paper is as follows. First, Section 2 presents the proposed spherical adaptation using distortion-aware convolutions. Then, a brief overview of the three visual modalities is proposed in Section 3, along with the presentation of the selected networks for the spherical adaptation. Finally, Section 4 provides the comparison results between the adapted models and their baselines on virtual and real outdoor equirectangular images.
Distortion-Aware Convolutions
The proposed spherical adaptation is based on distortion-aware convolutions. First, we present the mathematical model using a local perspective projection of the kernels on the sphere. Then, we describe the implementation and use of this adaptation on perspective networks.
Local Perspective Projection on the Sphere
The original adaptive convolution was initially presented in [16], where the authors proposed to learn the offsets in an end-to-end manner. More recent works exploit this idea using fixed offsets. In [13], the authors show that a depth prior can be used to compute the adaptive kernel statically, leading to better awareness of the geometry. An adaptive convolution was also exploited for omnidirectional images [8], where the standard perspective kernel is modified to fit the equirectangular distortions. To build a kernel of resolution $r$ and angular size $\alpha$ centered at a location $p_{00} = (u_{00}, v_{00})$ in the equirectangular image, the center coordinates are first transformed to the spherical system $p_{s,00} = (\phi_{00}, \theta_{00})$ using

$$\phi_{00} = \left(\frac{u_{00}}{W} - \frac{1}{2}\right) 2\pi, \qquad \theta_{00} = \left(\frac{v_{00}}{H} - \frac{1}{2}\right) \pi, \tag{1}$$

where $W$ and $H$ are, respectively, the width and the height of the equirectangular image in pixels. Each point of the kernel is defined by

$$\hat{p}_{ij} = (i, j, d), \tag{2}$$

where $i$ and $j$ are integers in the range $\left[-\frac{r-1}{2}, \frac{r-1}{2}\right]$ and $d$ is the distance from the center of the sphere to the kernel grid. In order to cover the field of view $\alpha$, the distance is set to $d = \frac{r-1}{2\tan(\alpha/2)}$. The coordinates of these points on the sphere are computed by normalizing them and rotating them to align the kernel center on the sphere. Therefore,

$$\hat{p}_{s,ij} = R_y(\phi_{00})\, R_x(\theta_{00})\, \frac{\hat{p}_{ij}}{\lVert \hat{p}_{ij} \rVert}, \tag{3}$$

where $R_a(\beta)$ stands for the rotation matrix of an angle $\beta$ around the $a$ axis. These coordinates $\hat{p}_{s,ij} = (x_{ij}, y_{ij}, z_{ij})$ are transformed to latitude and longitude in the spherical domain using

$$\phi_{ij} = \arctan\!\left(\frac{x_{ij}}{z_{ij}}\right), \qquad \theta_{ij} = \arcsin(y_{ij}), \tag{4}$$

and finally back-projected to the original 2D equirectangular image:

$$u_{ij} = \left(\frac{\phi_{ij}}{2\pi} + \frac{1}{2}\right) W, \qquad v_{ij} = \left(\frac{\theta_{ij}}{\pi} + \frac{1}{2}\right) H. \tag{5}$$

In Figure 1, examples of kernels at different latitudes and longitudes are presented. The blue point marks the center of the kernel $p_{00} = (u_{00}, v_{00})$. The red points are the positions of the elements $p_{ij} = (u_{ij}, v_{ij})$ of the adapted kernel, defined as above. The green points are the positions of the elements of a standard perspective kernel, given by $p_{ij} = (u_{00} + i,\; v_{00} + j)$.
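To make the projection concrete, the following Python sketch computes the adapted sampling positions for one kernel location. It is a minimal illustration of the equations above, not the authors' released code; the function name and default values are ours.

```python
import numpy as np

def adapted_kernel(u00, v00, W, H, r=3, alpha=np.deg2rad(5.0)):
    # Center pixel -> spherical coordinates (longitude phi, latitude theta).
    phi00 = (u00 / W - 0.5) * 2.0 * np.pi
    theta00 = (v00 / H - 0.5) * np.pi
    # Regular r x r grid at distance d so the kernel spans a field of view alpha.
    d = (r - 1) / (2.0 * np.tan(alpha / 2.0))
    idx = np.arange(r) - (r - 1) / 2.0
    i, j = np.meshgrid(idx, idx, indexing="ij")
    pts = np.stack([i, j, np.full_like(i, d)], axis=-1)
    pts /= np.linalg.norm(pts, axis=-1, keepdims=True)  # normalize onto the unit sphere
    # Rotate the kernel so its center points towards (phi00, theta00).
    ct, st = np.cos(theta00), np.sin(theta00)
    cp, sp = np.cos(phi00), np.sin(phi00)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, ct, -st], [0.0, st, ct]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    pts = pts @ (Ry @ Rx).T
    # Back to spherical coordinates, then to equirectangular pixel coordinates.
    phi = np.arctan2(pts[..., 0], pts[..., 2])
    theta = np.arcsin(np.clip(pts[..., 1], -1.0, 1.0))
    u = (phi / (2.0 * np.pi) + 0.5) * W
    v = (theta / np.pi + 0.5) * H
    return u, v  # sampling positions of the distortion-aware kernel
```

Subtracting the regular perspective grid $(u_{00} + i, v_{00} + j)$ from these positions yields the per-location offsets used in the implementation described in Section 2.2.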
Implementation in a Perspective Network
The distortion-aware convolution strategy does not require additional parameter learning. As a result, it avoids the need for large and complex spherical training datasets. Figure 2 presents a schema of the general implementation process.

Figure 2. General adaptation process: 1. The architecture and weights come directly from pretraining on perspective datasets. 2. The convolution layers are modified using a precomputed shift table that takes equirectangular distortions into account. 3. Finally, we directly use spherical images as input to the adapted model to predict the modalities for which the network was pretrained.
The overall architecture and weights of the network are derived from a model trained in a supervised manner on perspective images with ground-truth modalities. We directly reuse the code and pretrained weights provided by the models' authors. This highlights how simply our solution integrates into previously published work and ensures good performance fidelity to the original publication.
We replace the standard convolution layers with new layers that handle the equirectangular distortions. In practice, the convolution operations are modified to add fixed offsets to the coordinates of the kernel points. These offsets are calculated using Equation (5), presented in Section 2.1, and require only the input sizes and the convolution parameters. The offset tables can therefore be computed offline. As a result, there is no slowdown in the execution of the adapted network.
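As a sketch of this replacement, the wrapper below applies a precomputed, fixed offset table through torchvision's deformable convolution. It assumes offsets shaped for one fixed input resolution (as produced, for example, by the projection sketch in Section 2.1) and is our illustration rather than the released implementation.

```python
import torch
from torchvision.ops import deform_conv2d

class DistortionAwareConv2d(torch.nn.Module):
    """Reuses a pretrained Conv2d unchanged; only the sampling grid moves."""

    def __init__(self, conv: torch.nn.Conv2d, offsets: torch.Tensor):
        super().__init__()
        self.conv = conv
        # offsets: (1, 2*kh*kw, H_out, W_out), computed offline once per layer.
        self.register_buffer("offsets", offsets)

    def forward(self, x):
        off = self.offsets.expand(x.shape[0], -1, -1, -1)
        return deform_conv2d(
            x, off, self.conv.weight, self.conv.bias,
            stride=self.conv.stride, padding=self.conv.padding,
            dilation=self.conv.dilation,
        )
```

Walking a pretrained model with `named_modules()` and swapping each `Conv2d` for this wrapper reproduces the process of Figure 2 without touching any weight.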
The proposed solution is compatible with every kernel, stride, or padding size. Therefore, this plugin can be implemented in any convolutional neural network architecture.
Tested Visual Modalities
Most of the latest computer vision methods are based on convolutional neural networks. To demonstrate the simplicity and versatility of our adaptation solution, we implement it on several networks used for very different tasks. We selected three commonly used vision tasks: semantic segmentation, depth estimation, and optical flow. Each modality has very distinct requirements, which challenges the robustness of our solution. This section presents the three visual modalities studied and the networks selected for each.
To highlight the generality of our demonstration, we selected three models of very different sizes and accuracies, from a state-of-the-art heavyweight network to an ultra-lightweight architecture for resource-constrained devices.
Moreover, to demonstrate the efficiency and simplicity of our solution, we selected networks with pretrained weights provided by the models' authors on perspective datasets. This also guarantees good performance fidelity with respect to the initial publication.
Semantic Segmentation
Semantic segmentation is an essential task in robotic vision. It provides a dense understanding of the different locations and object categories present in the image with pixel-level accuracy. This offers abundant cues for upper-level navigation operations. Furthermore, thanks to omnidirectional cameras, a moving agent can obtain a holistic and precise understanding of its surroundings.
To estimate the semantic segmentation in outdoor images, we choose the solution published by the MIT Scene Parsing team [19]. They propose a classical encoder-decoder architecture trained on the ADE20K dataset, which contains 20,000 mixed indoor and outdoor scenes annotated with 150 semantic classes.
The chosen architecture uses the ResNet50 dilated version as the encoder and PPMdeepsup as the decoder.
The ADE20K dataset contains 150 classes that are sometimes semantically close. The semantic segmentation network therefore assigns objects from a single ground-truth class in our test dataset to two or more different categories. To regroup these predictions, we merge closely related classes, as sketched below: the final tree class regroups the trees, plants, and canopy classes, and the ground class regroups the ground, earth, path, dirt, mountain, and hill classes.
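A minimal remapping sketch follows; the source class ids are illustrative placeholders, not the actual ADE20K indices.

```python
import numpy as np

TREE, GROUND, SKY, IGNORE = 0, 1, 2, 255

# Hypothetical source ids -> merged evaluation classes.
REMAP = {4: TREE, 17: TREE, 66: TREE,
         3: GROUND, 13: GROUND, 52: GROUND,
         91: GROUND, 16: GROUND, 68: GROUND,
         2: SKY}

def regroup(pred: np.ndarray) -> np.ndarray:
    out = np.full_like(pred, IGNORE)
    for src, dst in REMAP.items():
        out[pred == src] = dst
    return out
```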
Optical Flow
Optical flow estimation methods aim to compute the apparent motion of pixels between two frames, enabling autonomous vehicles and robots to acquire temporal cues about the surrounding scene. In a previous publication [14], we presented a spherically adapted optical flow method for omnidirectional images. In this paper, we generalize and update that earlier work. At the time, we implemented our solution on LiteFlowNet2 [20], one of the leading algorithms in 2020. Since then, optical flow methods have improved, mainly thanks to Transformer networks for the pixel correlation operation. These networks do not use convolutional layers in their core but still rely on CNNs to extract low-level features from the RGB inputs before processing them. This encoding is crucial for the subsequent processing, and we propose to adapt it to take into account the distortions in equirectangular images.
We select the solution GMFlow proposed by [21], which is currently one of the leading optical flow estimation algorithms.
Monocular Depth
Monocular depth estimation is an important computer vision task that aims to predict dense depth maps based on a single image. Thanks to its robustness to scene changes, this is the most commonly used visual modality for obstacle avoidance in navigation.
We select the MiDaS [22] network to test our solution. This supervised model presents a classical encoder-decoder architecture, built primarily to be embedded in resource-constrained devices such as drones, which makes it one of the lightest depth estimation networks. In addition, the network is highly versatile thanks to its training on multiple indoor and outdoor perspective datasets. We specifically choose the midas_v21_small pretrained version of the network.
Results
This section compares the spherically adapted networks and their baseline versions. First, we provide a quantitative comparison on the virtual outdoor datasets. Then, we further investigate the differences using samples from the previously studied virtual datasets and from real equirectangular images. For the latter, we capture images of various outdoor scenes with an omnidirectional camera and analyze the differences.
Testing Datasets
Outdoor scenes are generally more challenging for networks than indoor scenes, mainly due to the diversity of lighting and the limited amount of outdoor images in the training datasets. However, the available outdoor omnidirectional datasets are very limited and do not include ground truths for multiple visual modalities. In addition, forest images are rarely present in perspective training datasets, which further tests the robustness of the evaluated models. Therefore, forest scenes are an ideal environment to challenge the networks presented above.
For semantic segmentation and monocular depth estimation, we build a photorealistic forest environment, RWFOREST. Unfortunately, ground-truth extraction of the spherical optical flow is not yet available in this environment. As a result, we use two other datasets to test this visual modality: OmniFlowNet [14] and Flow360 [23]. Table 1 summarizes the different equirectangular datasets used to test the adapted networks. A more detailed presentation of the different environments is provided in the sections below.

Using the rendering capabilities of Unreal Engine [24] and the forest textures from its marketplace, we create a photorealistic forest environment with complex lighting and dense foliage. We propose, in this paper, RWFOREST: a dataset of 1000 equirectangular RGB images with associated ground-truth depth and semantic segmentation provided by the AIRSIM [25] plugin. Three semantic classes are distinguished: trees, ground, and sky. The image resolution is 256 × 256 for all images. Additional results at higher resolutions are provided in Appendix A. Figure 3 presents a sample of the RWFOREST dataset.
OmniFlowNet and Flow360 Datasets
The OmniFlowNet dataset, published in [14], features three different scenes called CartoonTree, Forest, and LowPolyModel. These sets are generated using Blender [26] with free 3D models available online. The dataset gathers 1200 equirectangular RGB images with associated ground-truth optical flow. The Flow360 dataset, published by [23], proposes several urban driving scenes at different times of day and in different weather conditions. It provides the ground-truth omnidirectional optical flow associated with the RGB image sequences. Figure 4 shows a brief overview of the OmniFlowNet and Flow360 datasets.
Quantitative Comparison on Virtual Outdoor Datasets
To facilitate the reading of the results, we group the comparison of the different visual modalities in Table 2. To do so, we selected an error metric specific to each modality: the complement of the Mean Intersection over Union (1 − MIoU) for semantic segmentation, the End-Point Error for optical flow, and the Relative Absolute Error for monocular depth (minimal implementations of these metrics are sketched below). Additional comparison metrics are offered in Appendix A, and the definitions of all the metrics used are provided in Appendix B. The comparison reveals that the networks adapted with distortion-aware convolutions always perform better than their perspective baseline counterparts. Observing this consistent improvement across all the modalities considered, we conclude that the proposed adaptation approach has excellent generalization capabilities. The improvement persists despite very different modality requirements, network architectures, and training datasets. Moreover, the modified convolution operation appears robust: the gain in the error metric exceeds 3% for each modality except on the Flow360 dataset.
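For reference, straightforward NumPy implementations of the three error metrics could look as follows; this is our sketch, and the exact definitions are the ones given in Appendix B.

```python
import numpy as np

def one_minus_miou(pred, gt, n_cls):
    ious = []
    for c in range(n_cls):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return 1.0 - float(np.mean(ious))

def end_point_error(flow, flow_gt):
    # Mean Euclidean distance between predicted and ground-truth flow vectors.
    return float(np.linalg.norm(flow - flow_gt, axis=-1).mean())

def relative_absolute_error(depth, depth_gt, eps=1e-6):
    return float((np.abs(depth - depth_gt) / np.maximum(depth_gt, eps)).mean())
```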
The smaller gain on Flow360 can be explained by the lack of periodicity in the estimated optical flow. The spherical optical flow is periodic, but the network did not learn this property during its perspective training. Thus, the estimation of the road pixel flow remains inaccurate. This lack of periodicity remains one of the limitations of this adaptation method for optical flow networks. However, the modified convolutions still improve the predictions, especially for single-object flow, as shown in the following qualitative study.
Below, we provide a qualitative analysis to further explore the differences between the predictions of all the models considered.
Qualitative Comparison on Real and Virtual Outdoor Datasets
This qualitative comparison presents sample predictions of the three modalities studied on the proposed virtual outdoor datasets. We also compare predictions on real outdoor images taken with a RICOH THETA Z1. During scene creation, we specifically looked for scenes with activity in the polar regions of the equirectangular images.
Semantic Segmentation
This section compares the semantic segmentation estimates of the adapted and baseline networks. Looking at the set of predictions made on the RWFOREST dataset, we notice two main improvements: better detection of shapes in the polar regions and fewer erroneous class estimates. Figure 5 shows the two prediction samples used to illustrate these results.
First, the spherical adaptation helps the network to take into account equirectangular distortions. The detection of shapes and objects is improved in highly distorted regions thanks to a better local coherence of the pixels. This effect is visible in sample 1, where the adapted network better identifies the tree canopy (upper polar region of the image).
In addition, the adaptation also reduces the number of noisy predictions. Some objects in the equirectangular images are highly distorted, resulting in false class predictions by the network. In sample 2, the upper polar region of the adapted version is less noisy and contains almost no false predictions.
We observe the same findings when estimating the semantic segmentation in real urban driving scenes. In our proposed example, Figure 6, we captured images when the car was passing under trees in order to focus on the tree canopy detection. The semantic segmentation predicted by the adapted network is more accurate than the baseline estimate, with better tree canopy identification and less noisy class predictions. This confirms that distortion-aware convolutions improve the semantic segmentation in virtual and real outdoor images.
A mask is added to the image's lower part to hide the car's semantic segmentation estimate. Indeed, the car's shape is strongly distorted due to its proximity to the omnidirectional camera. The absence of such images and nearby objects in the training dataset makes the network unable to make a correct prediction. Spherical adaptation improves the quality of semantic segmentation in spherical images but remains limited by the training dataset, as in all supervised methods.
Monocular Depth
Monocular depth predictions are more difficult to comment on than semantic segmentation because depth differences are less visible to the human eye. The visual results seem ambiguous, and it is challenging to decide which estimate is better. Therefore, a more detailed quantitative comparison is provided in Table 3. Additional metrics are provided there, all of which show that the depth prediction from the spherically adapted network is more accurate than that of the baseline version. In the appendix, Figure A1 shows the monocular depth estimation for the same RGB images used in the semantic segmentation examples.
For the real image examples, we focus on predicting the distance of objects in the upper polar region during urban driving scenes. Figures 7 and 8 show two image acquisitions, the first as the car passes under a bridge and the second as it drives by a large tree. As with the virtual images and the semantic segmentation examples, shape detection is improved in the polar regions and there are fewer erroneous depth estimates. Sample 1 shows that the spherical adaptation improves the depth prediction in the polar regions of the equirectangular images: in the upper left of the image, the bridge depth estimate is more accurate and smoother thanks to better local pixel coherence. In addition, sample 2 shows that the adapted prediction is less sensitive to illumination noise. The image contrast in the top polar region varies significantly with the sun configuration; the baseline network interprets these changes as depth differences, while the adapted model is more robust and remains accurate.
Optical Flow
The optical flow enhancements are clearly visible as objects move through the polar regions of the equirectangular image. Figure 9 shows two optical flow estimates on the Flow360 dataset. In both examples, the car passes under a streetlight. Thanks to the improved local pixel coherence provided by the distortion-aware convolutions, the adapted network is able to track the path of the streetlight in the upper polar region of the image. As a result, the estimated optical flow is close to the ground truth. In contrast, the non-adapted network has difficulty detecting this same streetlight; consequently, the flow prediction is inaccurate in sample 1 and even empty in sample 2.

For optical flow estimation in real images, we focus on the motion of a ball during a throw. Figure 10 shows two different image sequences with the associated optical flow predictions. Thanks to better local pixel coherence, the adapted model keeps track of the ball and provides an accurate motion estimate. In contrast, the baseline network loses track of the ball, resulting in a noisy optical flow prediction without an apparent precise motion. This result confirms the improvement in optical flow estimation in virtual and real images provided by distortion-aware convolutions.
Conclusions
This paper presents a generalization of the spherical adaptation of perspective methods to equirectangular images using distortion-aware convolutions. We tested and validated the adaptation on three fundamental visual modalities in computer vision: semantic segmentation, optical flow, and monocular depth.
A state-of-the-art network was modified for each modality to take the spherical distortions into account, with a simple and fast adaptation requiring no architecture modification or additional training. When tested on virtual equirectangular outdoor images, the adapted version outperformed its baseline in all cases. Regardless of the visual modality, the network estimations were improved in the highly distorted regions: the predictions were smoother thanks to better local pixel coherence, and there were fewer erroneous estimations. We observed the same results when applying these methods to real outdoor equirectangular images.
Therefore, although this solution does not compete with networks specializing in spherical images, it allows the simple and fast adaptation of any architecture. Furthermore, it can easily overcome the lack of outdoor omnidirectional datasets. Finally, it allows us to keep up with the new architectures regularly proposed in deep learning for perspective images.
Figure A1. Prediction examples in the RWFOREST dataset. The predicted depth images are visually challenging to compare; however, quantitative measurements show that the adapted version is numerically better than the baseline. Top left: RGB input; top right: ground-truth monocular depth; bottom left: prediction from the baseline network; bottom right: prediction from the adapted network.
Single objective light-sheet microscopy for high-speed whole-cell 3D super-resolution
We have developed a method for performing light-sheet microscopy with a single high numerical aperture lens by integrating reflective side walls into a microfluidic chip. These 45° side walls generate light-sheet illumination by reflecting a vertical light-sheet into the focal plane of the objective. Light-sheet illumination of cells loaded in the channels increases image quality in diffraction limited imaging by reducing out-of-focus background light. Single molecule super-resolution imaging is also improved by the decreased background, resulting in better localization precision and decreased photo-bleaching, leading to more accepted localizations overall and higher quality images. Moreover, 2D and 3D single molecule super-resolution data can be acquired faster by taking advantage of the increased illumination intensities in the focused light-sheet as compared to wide field.
Introduction
Single molecule super-resolution (SMSR) imaging is performed by localizing individual fluorophores within a densely labeled sample. SMSR in 3D throughout a mammalian cell is difficult because wide field activation and imaging light cannot be targeted to the in-focus image plane. This results in high background fluorescence, leading to large localization errors [1,2]. To prevent high background fluorescence in SMSR imaging, total internal reflection fluorescence (TIRF) illumination is often used to confine the excitation light to ≈300 nm above the coverslip, but this limits the application to imaging of targets close to the ventral plasma membrane. To overcome these limitations, we use a single objective along with a reflective surface to generate illumination only in the image focal plane. We will refer to this method as Single Objective Light-Sheet Microscopy (SO-LSM).
In traditional light-sheet microscopy (also termed selective/single plane illumination microscopy, SPIM), an objective placed perpendicular to the detection objective is used to generate a thin sheet of light that illuminates only the image focal plane [3]. This optical design requires a specialized microscope in which the illumination and detection objectives have to be placed close together. When high numerical aperture (NA) objectives are needed, as is the case for SMSR imaging, this two-objective design poses problems due to the short working distance and bulky nature of these objectives. Various attempts have been made to circumvent these limitations. Hu and coworkers proposed a method termed light-sheet Bayesian super-resolution in which they use a prism-coupled condenser design to illuminate a thin slice [4].
In reflected light-sheet microscopy, two opposing objectives are used together with a polished AFM cantilever, which reflects the light-sheet formed by one objective at the focal plane of the other objective [5]. Recently, this approach was adapted to use commercially available microprisms attached to standard coverslips instead of the AFM cantilever [6]. Other approaches include Bessel beam plane illumination [7], individual molecule localization-selective plane illumination microscopy [8] and lattice light-sheet microscopy [9]. However, all these setups are complex and require two objectives that need to be critically aligned, which limits their applicability. Highly inclined and laminated optical sheet (HILO) microscopy (also termed variable-angle epi-fluorescence microscopy) uses only one objective, but the field of view is limited by the oblique illumination and the illumination intensity varies with z-depth [10,11]. Recently, Galland and coworkers used a single objective combined with a 45° reflective surface to produce SPIM 3D optical sectioning and termed their method single-objective selective-plane illumination microscopy (soSPIM) [12].
Here, we describe the use of a single objective, inverted epi-fluorescence microscope along with a reflective surface for illumination and detection ( Fig. 1), related to the approach taken by Galland et al. [12]. In our approach the reflective surface forms the side wall of a microfluidic channel incorporated into a microfluidic device. A light-sheet is generated through the objective and reflected by the mirror surface such that it illuminates only the in-focus plane of the cell (Fig. 1). The microfluidic device provides benefits such as a sealed environment and fast, automated buffer exchange which are beneficial for SMSR applications where buffer conditions are critical.
Layout of optical setup
The outline of the optical setup is shown in Fig. 2. The microscope's excitation source is a 642 nm diode laser (Thorlabs, HL6366DG), which is collimated by aspherical lens AL1; a laser diode clean-up filter (Semrock, LD01-640/8-12.5) is used to remove undesirable laser light. The laser is coupled into an optical fiber (P1-488PM-FC-2, 0.12 NA, Thorlabs) via two mirrors and aspherical lens AL2. The beam out of the fiber is collimated by an aspherical lens, AL3. For the wide field illumination light path, flip mount mirror FMM1 is flipped up and reflects the light onto mirror M5. The laser beam passes through a telescope system formed by aspherical lens AL4 and lens L3, which expands the beam to the desired diameter. Flip mount mirror FMM2 is flipped down, and mirror M6 reflects the light through the iris and TL1 onto the dichroic mirror (Semrock, Di02-R635-25x36) and into the objective (Olympus, UPLSAPO 60X W), which is placed vertically in an inverted microscope configuration. The fluorescence emission passes through the dichroic mirror DM and an emission filter EM (Semrock, FF01-708/75-25) before being focused by tube lens TL2 onto an sCMOS camera (Hamamatsu ORCA-Flash4.0 V2 Digital CMOS, C11440-22CU). A cylindrical lens CY3 is placed 46 mm before the camera chip to create astigmatism for 3D emitter localization. The cylindrical lens CY3 is removed for diffraction limited imaging and 2D super-resolution.
For light-sheet generation, flip mount mirror FMM1 is flipped down and the laser is reflected by mirrors M3 and M4. Figure 2(b) shows the light paths in the x-z and y-z planes, with z being the dimension along the optical axis. In the x-z plane, the laser line is diverged by a cylindrical lens, CY2, and re-collimated by lens L1. When the beam is collimated after L1, the waist of the light-sheet will be positioned at the focal plane of the objective; however, this is not the ideal position, as the beam will diverge right after reflection by the channel wall, before reaching the cell. Therefore, the position of CY2 is not fixed and can be adjusted along the optical axis to position the beam waist farther from the focal plane of the objective and within the cell after reflection (see Section 2.2). A slit is used to control the x dimension of the laser line. After reflection by the galvanometric mirror GM and the flip mount mirror FMM2, which is flipped up for light-sheet illumination, the laser line is focused in the x-z plane at mirror M6. Mirror M6 is placed at the conjugate focal plane of the sample plane, 180 mm from the tube lens TL1. Therefore, the focused laser line at M6 is imaged at the sample plane through the tube lens TL1, the dichroic mirror DM, and the objective. In the y-z plane, the laser beam is expanded by the laser line generating lens LLLG (LaserLine Optics Canada, LOCP-8.9R01-1.0) and re-collimated by a cylindrical lens CY1. L1 focuses the laser in the y-z plane at the galvanometric mirror GM. After being reflected by the galvanometric mirror, the laser line diverges in the y-dimension and is collimated by L2. After reflection by flip mount mirror FMM2 and mirror M6 and passing through the iris, the laser line is imaged by tube lens TL1 and the objective. The laser line is reflected by the side wall of a microfluidic channel and forms a light-sheet in x-y, across the focal plane of the objective.
The sample is placed in a sample holder (Thorlabs, MAX3SLH) which is mounted onto x and y manual translational stages (Thorlabs, PT1/M). The sample z-position is controlled by a combined manual/piezo translational stage (Thorlabs, NFL5DP20S/M). The piezo is coupled to a strain gauge reader for feedback and operated in closed loop control. For z-scanning the piezo is used to scan the sample in z and the galvanometric mirror is used to position the light-sheet at the correct x-position such that it is reflected at the focal plane of the objective.
Positioning the beam waist at the cell
In the ideal situation, the entire length of the light-sheet (confocal parameter) should be centered on the cell, so that the thinnest part of the light-sheet spans the cell area. Therefore, the beam waist should be moved farther away from the objective. To achieve this, the incoming beam should not be collimated: the beam curvature at the back aperture of the objective lens should not be infinite but should have a finite value. In Fig. 3, $R_1$ is the beam curvature at the back aperture of the objective. Given that $R_1$ is on the order of tens of millimeters, it can be approximated as the object distance of the intermediate virtual object of the beam waist inside the sample medium. In order to move the beam waist away from the objective, the best option is to move the second cylindrical lens (CY2, Fig. 2), which affects only the optical path in the x-z dimension.

Figure 3 shows the excitation beam path in the x-z dimension starting from CY2. As shown in the top panel, if $I_3$ is at the designed focus of the objective, its virtual object $I_4$ is at infinity, and $R_1 \approx \infty$, $S_1 = f_1$, in which $f_1$ is the focal length of lens L1. By moving CY2 a distance $d_1$ (bottom panel), $I_4$ is moved towards the objective, and $R_1$ takes a finite value. The derivation of $R_1$ from $d_1$ is given below. Lenses L1 and L2 form a telescopic system with an axial magnification of $(f_2/f_1)^2$, in which $f_2$ is the focal length of L2. Thus $S_2$, which is originally equal to $f_2$, can be calculated from

$$S_2 = f_2 - d_1 \left(\frac{f_2}{f_1}\right)^2. \tag{1}$$

Since $D_2$ is fixed and equal to $f_2 + f_3$, $S_3$ is given by

$$S_3 = D_2 - S_2 = f_3 + d_1 \left(\frac{f_2}{f_1}\right)^2. \tag{2}$$

Although TL1 and the objective also form a telescope-like system, its lateral magnification does not follow from the telescope formula, because the objective is a compound lens system that cannot be simplified to a thin lens. The image formation of TL1 is described by the thin-lens equation

$$\frac{1}{S_4} = \frac{1}{f_3} - \frac{1}{S_3}, \tag{3}$$

then

$$R_1 = S_4 - D_3 \approx S_4 - f_3. \tag{4}$$

Combining Eqs. (1) to (4) gives $R_1 = f_1^2 f_3^2 / (f_2^2 d_1)$, and using Newton's relation for the objective (of focal length $f_{obj}$),

$$z_f = \frac{f_{obj}^2}{R_1} = \frac{f_{obj}^2 f_2^2}{f_1^2 f_3^2}\, d_1, \tag{5}$$

where $z_f$ is the displacement of the beam waist from the focal plane of the objective. Thus, $z_f$ can be related to $d_1$ through Eq. (5), and CY2 can be moved towards the objective in order to move the beam waist away from the objective and the channel wall, into the cell.

Fig. 3. $I_1$ is the virtual object of $I_2$; $I_2$ is the real object of $I_3$, which is the beam waist; $I_4$ is both the virtual image of $I_2$ and the virtual object of $I_3$. If $I_3$ is at the designed focal position of the objective lens, $S_1$ is equal to the focal length of L1 ($S_1 = f_1$). $D_1$, $D_2$ and $D_3$ are fixed distances, equal to $f_1 + f_2$, $f_2 + f_3$ and $f_3$, respectively ($D_3$ is approximately equal to $f_3$, because the focal length of the objective lens is usually much smaller than the focal length of the tube lens TL1). In the bottom drawing, CY2 is moved by a distance $d_1$, thereby moving the beam waist $I_3$ away from the objective lens.
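Since the relations above are reconstructed from the surrounding text, the following sketch should be read only as a numeric illustration of them: the sign conventions, the Newtonian image-shift step, and all focal-length values are assumptions, not the authors' published numbers.

```python
def beam_waist_shift(d1, f1, f2, f3, f_obj):
    """Evaluate the reconstructed Eqs. (1)-(5) for a CY2 displacement d1 (mm)."""
    delta = d1 * (f2 / f1) ** 2   # axial object shift after the L1-L2 telescope
    R1 = f3 ** 2 / delta          # finite curvature at the objective back aperture
    zf = f_obj ** 2 / R1          # Newtonian longitudinal image shift (assumption)
    return R1, zf

# Example with placeholder focal lengths (mm):
# R1, zf = beam_waist_shift(d1=1.0, f1=50.0, f2=100.0, f3=180.0, f_obj=3.0)
```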
Light-sheet dimensions
We measured the dimensions of the light-sheet (Fig. 4(a)) by moving the tube lens with respect to the camera and imaging an Alexa 647 coated coverslip (Fig. 4(b,c); see Section 4.2 for details). For a 2 mm slit opening, which is approximately the x-dimension of the beam at the slit position, we found the thickness ($\omega_0$) of the light-sheet to be 1.1 µm and the confocal parameter (the range over which the thickness is smaller than $\sqrt{2}\,\omega_0$) to be 11.9 µm (Fig. 4(d)), thereby matching the dimensions of a single cell. Increasing the slit opening did not change the beam profile (data not shown). We can tune the light-sheet dimensions by closing the slit, as shown for a 1 mm slit opening (Fig. 4(d)), which increases the thickness to 1.6 µm and the confocal parameter to 32.3 µm. This allows for imaging of larger cells by compromising on light-sheet thickness and intensity. The size of the light-sheet in the y-dimension (width) is 27 µm, which results in a field of view of 11.9 × 27 µm for a 2 mm slit opening and 32.3 × 27 µm for a 1 mm slit opening. To increase the field of view for larger cells, the y-dimension of the beam could be tuned by changing the fan angle of the laser line generating lens.
Microfluidics chip design and fabrication
Microfluidic chips with integrated reflective surfaces were produced using bulk micromachining of silicon. Several methods currently exist to fabricate and optimize 45° angled sidewalls in silicon [13,14], and for this effort we utilized the potassium hydroxide (KOH) with organic surfactant approach [15]. The schematic of the device fabrication approach is shown in Fig. 5(a). Silicon wafers with (1 0 0) orientation were cleaned, and a 1 µm thermal oxide layer was grown on the substrates. Photoresist was spun on the wafers, and channel features of varying width (50 µm to 300 µm) were created on the substrates using photolithography. The features were then transferred into the oxide with a reactive ion etch. Channels with 45° angled sidewalls were then etched into the silicon substrate using a KOH solution with Triton X-100 detergent added to improve the smoothness of the etched sidewalls [15]. Since the KOH etch chemistry is selective for silicon and does not etch the silicon dioxide, the oxide mask is undercut during the etch process, leaving oxide overhangs (Fig. 5(a(iv),c)). After the wet etch process, the substrates were rinsed thoroughly and subjected to a buffered hydrofluoric acid (HF) etch to remove the oxide mask. Using a contact profilometer, the sidewall angle for the etched microchannels was determined to be 42.9 ± 0.63° for a microchannel depth of 41.3 ± 1.7 µm (n = 20). As expected from an anisotropic etch determined by the silicon crystal orientation, SEM imaging indicated that the microchannel sidewalls were qualitatively flat over the length of the sidewall, with some curvature at the crystal plane intersections at the top and bottom of the channels (Fig. 5(d)). Atomic force microscopy (AFM) was used to examine the surface roughness of the sidewalls (R_rms = 4.14 nm) in comparison to the roughness of the top of the silicon wafer that was previously under the oxide mask (R_rms = 1.75 nm). The channel sidewalls have increased surface roughness due to the difference between the etching plane of the bottom surface (1 0 0) and that of the sidewalls (1 1 0). After another round of thorough rinsing, the substrates were placed into an evaporator and coated with a thin layer of aluminum, chosen for its high reflectivity in the optical wavelength range compared to other metals. Figure 5(d) shows a channel with 45° angled sidewalls as measured with scanning electron microscopy (SEM) and contact profilometry. After the wafers were coated with aluminum, they were sawed into chips and anodically bonded to pyrex coverslips to enclose the microchannels.
To create a reliable fluidic and optical interface with the microfluidic chip, it was housed in a plastic laminate package. The package was made from five layers of adhesive-coated and uncoated plastic sheets (Fig. 6(a)). Each sheet was laser cut, and the sheets were then laminated together to form a three-dimensional mechanical housing to which the microfluidic chip was attached (Fig. 6(b)). The upper four layers comprised 2.0 mm polymethylmethacrylate (PMMA, Astra Products), 0.10 mm acrylic adhesive tape (90445, Adhesives Research), 2.0 mm PMMA, and 0.10 mm 90445 tape. These layers housed silicone O-rings (dash 001, McMaster-Carr), which received and supported the PEEK or PTFE tubing (OD 1/32 inch) used to supply buffers, gel, sample, and cleaning reagents to each of the channels individually. The lower three layers comprised 0.5 mm PMMA (Astra Products), 0.10 mm 90445 tape, and 0.20 mm PMMA (Astra Products). The PMMA layers provided mechanical support and alignment of the microfluidic chip. The tape layer provided a channel for the gel to be loaded into the mirrored micro-channel without interfering with subsequent introductions of fluids (Fig. 6(c)). The bottom side of the adhesive was also used to adhere the microfluidic chip (silicon side) to the package. The final arrangement resulted in the imaging side of the microfluidic chip being oriented opposite the fluidic tubing interface (Fig. 6(d,e)); the package was then secured in the microscope setup.
Characterization of microfluidic channels with 45° mirror side walls
SEM was performed to image the channels after fabrication, using a Hitachi S4800 microscope. AFM was performed to characterize the surface roughness of the reflective surfaces. A Dimension 5000 microscope was used in tapping mode, and the raw data was modified by leveling to order 1.
Measurement of light-sheet dimensions
The profile of a focused Gaussian beam along the optical axis is described by

$$\omega_z = \omega_0 \sqrt{1 + \left(\frac{z - z_0}{z_R}\right)^2}, \tag{6}$$

where $\omega_z$ is the beam radius at which the field intensity drops to $1/e^2$ of the axial value, $\omega_0$ is the beam radius at position $z_0$, which is the minimum value of $\omega_z$ and known as the beam waist (Fig. 4(a)), and $z_R$ is the Rayleigh range. To measure the dimensions of the light-sheet coming out of the objective, an 8-well chamber slide (LabTek) was coated with AlexaFluor 647 and positioned above the objective. In order to image multiple focal planes above the objective without changing the light-sheet, the tube lens was moved to and from the camera (Fig. 4(b)). For each position of the tube lens, the piezo stage was used to move the fluorescent slide into the focal plane, and 100 images of the beam profile illuminating the slide were recorded for 3 slit apertures. The z-position of the piezo was recorded with the images, and the position of the tube lens was marked. Next, the tube lens was moved to another position and the procedure was repeated. After images were acquired for each position of the tube lens, the fluorescent slide was exchanged for a calibration grid with known dimensions (Thorlabs, R1L3S3P). For each position of the tube lens, the grid was imaged to determine the magnification of the system with the tube lens at that position. The y-projection of the mean of the 100 images was corrected for the measured magnification and fitted to a Gaussian function to determine the beam width (Fig. 4(c)). Next, the beam width was plotted against the z-position (as recorded by the piezo) for each slit aperture width, and the resulting curves were fitted to Eq. (6) (Fig. 4(d)).
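The beam-width-versus-z fit described here can be reproduced in a few lines. The authors' analysis used Matlab (see the Computation section), so this Python/SciPy version is only an equivalent sketch, and the array names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def beam_radius(z, w0, z0, zR):
    # Eq. (6): Gaussian beam radius along the optical axis.
    return w0 * np.sqrt(1.0 + ((z - z0) / zR) ** 2)

# z_um: recorded piezo positions; w_um: Gaussian widths of the y-projections.
# popt, _ = curve_fit(beam_radius, z_um, w_um, p0=(1.0, 0.0, 10.0))
# w0, z0, zR = popt
# confocal_parameter = 2 * zR  # range where the width stays below sqrt(2)*w0
```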
Fixation and labeling
All labeling and washing steps were carried out at room temperature with rotation unless stated otherwise. Cells were harvested by trypsinization, and suspension cells were fixed in 4 % paraformaldehyde in PBS for 2 h. Cells were washed once with PBS, blocked and permeabilized in PBS + 5 % BSA + 0.05 % Triton X-100 for 15 min to 30 min, and stored at 4 °C in PBS + 2 % BSA + 0.05 % NaN₃. Cells were labeled with primary antibody in PBS + 2 % BSA + 0.05 % Triton X-100 for 2 h at room temperature, followed by extensive washing in PBS + 2 % BSA.
Diffraction limited imaging
For diffraction limited imaging, fixed and labeled cells were loaded into Poly-L-lysine (PLL) coated channels in PBS by a syringe pump and were left to settle for 1 h before the PBS was replaced by imaging buffer. Imaging was performed in an imaging buffer with an enzymatic oxygen scavenging system: 50 mM Tris, 10 mM NaCl, 10 % w/v glucose, 168.8 U mL⁻¹ glucose oxidase (Sigma #G2133), 1404 U mL⁻¹ catalase (Sigma #C9332). Diffraction limited images were acquired by taking 15 frames at each z-plane, with planes spaced 250 nm apart. The exposure time was 20 ms frame⁻¹. In Fig. 7, the images are the background-corrected mean of all images taken at one z-plane. Line profiles and orthogonal projections were calculated from these mean images using Fiji [17].
SMSR imaging
Cells were imaged in standard dSTORM imaging buffer [18] with an enzymatic oxygen scavenging system, supplemented with 50 mM 2-aminoethanethiol (MEA), pH 8.5. For SMSR imaging, suspension cells were imaged in a 1 % low melting temperature agarose (A9045, Sigma Aldrich) gel. The gel was prepared by dissolving agarose in PBS using microwave heating. Aliquots of 4 % agarose solution were stored at 4 °C until use. Agarose aliquots were heated to 80 °C to melt and cooled to 37 °C before mixing with cells. Cells were washed once in dSTORM buffer before mixing with the agarose. The agarose-cell mixture was diluted in dSTORM buffer to obtain a 1 % agarose concentration. Before cell loading, the microfluidic chip was filled with dSTORM buffer through the buffer inlet. The agarose-cell mixture was loaded through the gel inlet port using a manual syringe while the channel was imaged close to the outlet port to ensure the whole channel was filled. Next, the channel was cooled on ice for 5 min to let the agarose gel. The chip was assembled on the microscope stage and left to equilibrate to room temperature for 15 min before perfusion of dSTORM buffer was started and imaging was performed. dSTORM buffer was flowed through the gel at 10 nL min⁻¹ during image acquisition. For wide field illumination, cells were not imaged in the microfluidic chips because reflections from the mirror walls and the top of the channel created high background. Instead, cells were imaged in an 8-well chamber. To mimic the imaging conditions in the channel, the cells were also imaged in a 1 % agarose gel.
For 2D SMSR imaging, a single plane was imaged for 200 sequences of 2000 frames. For 3D whole-cell imaging, cells were imaged from bottom to top; 2000 frames were acquired at each z-plane, and planes were spaced 250 nm apart. The scan was repeated throughout the whole cell twelve times, resulting in 24,000 frames for each z-position. The acquisition procedure was the same for wide field and light-sheet illumination.
Super-resolution image analysis and reconstruction
For both 2D and 3D super-resolution, the raw data were converted to photon counts by subtracting a pixel-specific offset and multiplying by a pixel-specific gain factor, as described for sCMOS cameras [19]. We accounted for the pixel-dependent read noise in our fitting algorithms as described earlier [19].
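In code, this per-pixel conversion amounts to a single expression; the sketch below is ours (the original pipeline was in Matlab), and the offset and gain maps are camera calibration data.

```python
import numpy as np

def to_photons(raw, offset_map, gain_map):
    # Pixel-wise sCMOS calibration: photons = (counts - offset) * gain, floored at 0.
    return np.maximum((raw - offset_map) * gain_map, 0.0)
```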
For 2D super-resolution imaging, single emitters in each frame are identified as described in [20]. The method performs two filtering steps to reduce the Poisson noise and smooth the data, then finds the pixel coordinates of local maxima and uses these as the centers of fitting regions. Each fitting region measured 7 × 7 pixels, and all fitting regions were fed into the 2D localization algorithm, which maximizes the likelihood function using a Newton-Raphson method to iteratively update the fitting parameters: the x and y positions, the total photon count, the background photon count and the size (σ) of the PSF. The localized emitters were filtered through thresholds on the localization precision calculated from the Cramer-Rao Lower Bound (CRLB) and on the p-value. The accepted emitters were used to reconstruct the SR image, in which each emitter is represented by a 2D Gaussian with σ_x and σ_y equal to the smaller of the localization precisions in x and y.
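The final rendering step can be sketched as follows, in Python rather than the authors' Matlab; the naive O(N·H·W) loop is kept short for clarity, and the 10 nm SR pixel size is a hypothetical default.

```python
import numpy as np

def render_sr(x_nm, y_nm, prec_nm, shape, px=10.0):
    """Draw each accepted emitter as a 2D Gaussian whose width is its
    localization precision, on a grid of px-nanometre pixels."""
    img = np.zeros(shape)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for x, y, s in zip(x_nm / px, y_nm / px, np.maximum(prec_nm / px, 0.5)):
        img += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2.0 * s ** 2))
    return img
```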
For 3D whole-cell imaging, we used a scale space filtering approach to reduce noise and enhance individual emitter signals. The scale space consists of five difference-of-Gaussians filters with varying x and y dimensions to match the in- and out-of-focus shapes of individual emitters (a minimal sketch of such a filter bank is given after this paragraph). Pixel coordinates of local maxima in the scale space are intensity thresholded using an automated threshold selection method [21] and are used as the centers of fitting regions. The fitting region size is 8 × 8 pixels (1.22 µm), and a phase-retrieved PSF model is used in a 3D localization algorithm based on maximum likelihood estimation (MLE) [1,22] with a Poisson noise model. All fitting regions were fed into the 3D localization algorithm, which maximizes the likelihood function using a Newton-Raphson method to iteratively update the fitting parameters: the x, y and z positions, the intensity, and the background photon count. The localized emitters were filtered by thresholding the intensity, background, p-value and the CRLB on σ_x, σ_y and σ_z of the estimate. Localizations were frame connected as described earlier [22]. The accepted emitters were used to reconstruct SR images in which every emitter was plotted as a 3D Gaussian blob with σ_x and σ_y equal to the minimum of the two localization precisions based on the CRLB. The cylindrical lens used to create the astigmatism of the PSF also alters the magnification in the y-dimension, resulting in different pixel sizes in the x and y dimensions, which we corrected for in the image reconstructions.
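The difference-of-Gaussians filter bank mentioned above could be sketched like this; the σ values and the 1.6 scale ratio are illustrative, not the paper's actual parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_scale_space(frame, sigmas=((1.0, 1.0), (1.5, 1.5), (2.0, 2.0),
                                   (1.0, 2.0), (2.0, 1.0))):
    """Five DoG filters with varying (y, x) widths to match in-focus and
    astigmatically elongated out-of-focus emitter shapes."""
    stack = []
    for sy, sx in sigmas:
        fine = gaussian_filter(frame, (sy, sx))
        coarse = gaussian_filter(frame, (1.6 * sy, 1.6 * sx))
        stack.append(fine - coarse)
    return np.stack(stack)  # (5, H, W) scale space; threshold local maxima next
```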
Computation
All data analysis was performed in Matlab (versions 2014a and 2015a, The MathWorks) with the DIPimage toolbox (http://www.diplib.org/) unless stated otherwise. Position estimation was performed using C and NVIDIA CUDA code compiled to Matlab MEX files [22].
Results
To demonstrate the decrease in fluorescence background for single cells, we imaged fixed RBL cells labeled for TOM-20 in the microfluidic channels with wide field and light-sheet illumination (Fig. 7(a,b)). Direct comparison of the same cell imaged with wide field and light-sheet illumination clearly demonstrated the reduced background and increased contrast of the mitochondria, as highlighted by the intensity profiles through individual mitochondria (Fig. 7(c)).
To investigate the advantages of light-sheet illumination for SMSR imaging, we performed 2D dSTORM imaging of mitochondria in HeLa cells, comparing wide field and light-sheet illumination. The 45° mirror surfaces do not extend into the coverslip; therefore, the reflection of the light-sheet is not optimal in the first few µm above the coverslip. To prevent the cells from touching the coverslip, we imaged them in an agarose gel. For wide field illumination, we chose to image cells in 8-well chamber dishes in a 1 % agarose gel to avoid increased background from reflections off the chip side walls and top. Figure 8(a,b) shows single cell images obtained with wide field (a) and light-sheet (b) illumination. With wide field illumination, detection and localization of individual emitters is difficult in fluorophore-dense areas of the cell due to illumination of out-of-focus emitters, resulting in a poorer reconstruction of the mitochondrial structure. Moreover, since the light-sheet focuses the illumination in one dimension, the illumination intensity is increased. Higher illumination intensities lead to faster blinking kinetics [19,23]; therefore, we could image 2.5 times faster with light-sheet than with wide field illumination (2 ms frame⁻¹ vs. 5 ms frame⁻¹) without decreasing the number of photons per emitter (Fig. 8(c)). The total acquisition times for these images, consisting of 150 sequences of 2000 frames, were 35 min for wide field illumination and 16 min for light-sheet illumination. Quantification of the number of background photons per pixel clearly demonstrates a reduction in background for light-sheet illumination (Fig. 8(d)), which is also reflected in the higher SNR for light-sheet than for wide field illumination (Fig. 8(e)). The localization precision, reported as the CRLB on σ_x in Fig. 8(f), is accordingly better for light-sheet than for wide field illumination.
To use the SO-LSM setup for whole-cell 3D dSTORM imaging, we placed a cylindrical lens in the emission light path to localize individual emitters in 3D. We imaged HeLa cells that were fixed in suspension and immunostained for TOM-20 in the microfluidic chips with light-sheet illumination (2 ms frame⁻¹) and in 8-well chambers with wide field illumination (5 ms frame⁻¹), as described for 2D SMSR. Raw image frames (Fig. 9(a,b)) show a higher background signal for wide field than for light-sheet illumination. Quantification of single emitter statistics shows that while the number of collected photons (Fig. 9(c)) is the same, the per-pixel background (Fig. 9(d)) is reduced for light-sheet illumination. This results in a better SNR (Fig. 9(e)) and a slightly improved localization accuracy (Fig. 9(f)) for the z-position of individual emitters. There could be several reasons why the difference in SNR does not result in a large improvement of the localization accuracy. First, the localization accuracy is reported only for accepted emitters. Thresholding of emitters is done based on intensity, background, localization accuracy and the p-value of the fit; therefore, only good localizations are represented in the plot. Indeed, when we compared all localizations without thresholding, the advantage of light-sheet illumination was more pronounced (data not shown). Moreover, the acceptance rates of localizations are higher for light-sheet illumination than for wide field (28 % vs. 25 %, respectively). Second, the high background in wide field illumination might prevent low intensity emitters from being detected by our box finding method, resulting in a bias towards better localization accuracies. However, this results in fewer detected emitters, thereby decreasing the final emitter density and image quality. This effect is stronger in 3D SMSR than in 2D SMSR because the 3D PSF is larger due to the astigmatism, which decreases the pixel intensities for single emitters but not for the out-of-focus background.
To investigate the advantage of the increased intensity in light-sheet illumination, we analyzed the data acquired for Fig. 9 at matched acquisition times. 3D reconstructions are shown for data acquired in 1, 3 and 5 minutes (Fig. 10). For all three time points, the images demonstrate better reconstructions for light-sheet than for wide field illumination. After 1 minute, the light-sheet image already shows well defined mitochondria, whereas hardly anything is visible with wide field illumination. After 3 and 5 minutes, the mitochondrial membranes are better defined for light-sheet than for wide field illumination. This clearly demonstrates the advantage of the higher illumination intensities of light-sheet illumination in decreasing the acquisition time needed for good reconstructed images.

Next, we imaged complete, top-to-bottom TOM-20 labeled HeLa cells with wide field and light-sheet illumination. Figure 11(a,b) shows x-y whole-cell projections of the reconstructed images, color-coded for z-depth. Figure 11(c-f) shows x-y projections of 2 µm thick slices in which individual mitochondria can be clearly resolved. One major advantage of light-sheet illumination for whole-cell 3D imaging is that there is no photo-bleaching of out-of-focus fluorophores. This is clearly demonstrated by the slower decrease in the number of accepted localizations per repeat cycle (a single scan through the whole cell) for light-sheet compared to wide field illumination (Fig. 12). This results in an overall higher number of accepted emitters, which contributes to better image quality and resolution, as shown in the zoomed-in regions of single mitochondria in Fig. 11(c-f), which are much better defined for the light-sheet illuminated cell (Fig. 11(d,f)) than for the wide field illuminated cell (Fig. 11(c,e)).
Discussion
For SMSR imaging, epi-fluorescence microscopes are currently the standard. TIRF illumination is often necessary to prevent high background from out-of-focus fluorophores; however, this limits the application of SMSR to the ventral membrane of the cell. This limitation motivated us to develop a simple device that can be used on an epi-fluorescence microscope to improve SMSR imaging by providing light-sheet illumination of the sample. To implement SO-LSM on an existing epi-fluorescence microscope, a side-port can be used to incorporate a light-sheet generating light path. The microfluidic chip with 45° mirror surfaces can be packaged in plastic packaging of any shape to fit on any microscope stage. For cell and buffer loading, regular syringes with or without a syringe pump can be used. This makes SO-LSM a cheap and simple alternative to more complicated and expensive optical systems for whole-cell 3D super-resolution imaging, such as structured illumination [24] or lattice light-sheet microscopy [9]. Furthermore, SMSR imaging is sped up by the increased laser intensity resulting from the focusing of the light-sheet, which overcomes the need for more powerful and expensive lasers. This benefit also appears in HILO microscopy [11]; however, the sheet intensity in HILO varies with z-depth within the sample, whereas with SO-LSM the intensity gain is the same at each z-position.
We showed that SO-LSM improves image quality in diffraction limited imaging. The microfluidic chip with 45° mirror surfaces can therefore also be used for live-cell imaging. Both single-plane and whole-cell imaging will benefit from light-sheet illumination due to the reduced background and decreased photo-bleaching and photo-toxicity. We showed that we can incorporate cells into a gel, which can be perfused on chip. This opens up the possibility of imaging live cells in a 3D environment created by a gel or matrix with high speed and good image quality.
Conclusion
In conclusion, we have shown that a microfluidic chip with incorporated 45° mirror surfaces can be used for light-sheet illumination with a single objective. This system provides decreased background in diffraction limited imaging. We demonstrated that, using the SO-LSM approach, we achieve a large background reduction in SMSR, resulting in a higher SNR and better localization accuracy. Moreover, photobleaching is significantly reduced in 3D whole-cell imaging, resulting in more localizations and improved reconstructed image quality. Due to the increased intensity in the light-sheet, we can speed up the acquisition up to 2.5-fold. SO-LSM is easily applicable on any common inverted epi-fluorescence microscope in which a side-port can be used for light-sheet generation. Therefore, SO-LSM provides a simple method to greatly improve whole-cell 3D super-resolution microscopy.

Fig. 11. SO-LSM improves whole-cell 3D super-resolution microscopy. HeLa cells were labeled for TOM-20 to visualize mitochondria and imaged using the 3D SMSR method. Cells were imaged from bottom to top, and sequences of 2000 frames were acquired at z-planes spaced 250 nm apart. The scan through the cell was repeated 12 times. x-y projections of reconstructed images with color-coded z-depth are shown for cells imaged using wide field (a,c,e) and light-sheet illumination (b,d,f). Zooms of single mitochondria are shown from the white boxed regions; the z-position of emitters is indicated by color coding. Images are reconstructed from whole-cell data (a,b) and 2 µm thick slices (c,d,e,f). LS = light-sheet, WF = wide field, scale bars 1 µm.
Inhibiting YAP expression suppresses pancreatic cancer progression by disrupting tumor-stromal interactions
Background The Hippo/YAP pathway is known to be important for development, growth and organogenesis, and dysregulation of this pathway leads to tumor progression. We and others find that YAP is up-regulated in pancreatic ductal adenocarcinoma (PDAC) and associated with a worse prognosis. Activated pancreatic stellate cells (PSCs) form components of the microenvironment that enhance the invasiveness and malignancy of pancreatic cancer cells (PCs). However, the role and mechanism of YAP in the PDAC tumor-stromal interaction are largely unknown. Methods The expression of YAP in pancreatic cancer cell lines and PDAC samples was examined by Western blot and IHC. The biological roles of YAP in cancer cell proliferation, epithelial-mesenchymal transition (EMT) and invasion were evaluated by MTT assay, quantitative real-time PCR analysis, Western blot analysis and invasion assay. The effect of YAP on PSC activation was evaluated under PC-PSC co-culture conditions and in a xenograft PDAC mouse model. Results First, knockdown of YAP inhibits PDAC cell proliferation and invasion in vitro. In addition, YAP modulates the PC-PSC interaction by reducing the production of connective tissue growth factor (CTGF) by PCs, inhibits paracrine-mediated PSC activation under PC-PSC co-culture conditions and in turn disrupts TGF-β1-mediated tumor-stromal interactions. Lastly, inhibiting YAP expression prevents tumor growth and suppresses the desmoplastic reaction in vivo. Conclusions These results demonstrate that YAP contributes to the proliferation and invasion of PCs and the activation of PSCs via tumor-stromal interactions and that targeting YAP may be a promising therapeutic strategy for PDAC treatment. Electronic supplementary material The online version of this article (10.1186/s13046-018-0740-4) contains supplementary material, which is available to authorized users.
Background
Pancreatic ductal adenocarcinoma (PDAC) is the fourth most common cause of cancer-related death in the USA, with a 5-year survival rate of less than 7% and a median survival time of less than 6 months. There were an estimated 53,070 new diagnoses and 41,780 deaths from pancreatic cancer in the United States in 2015 [1]. Pancreatic cancer is a highly invasive and metastatic cancer, and these characteristics are primarily responsible for treatment failure and the poor clinical prognosis. Pancreatic tumors are surrounded by a dense desmoplastic stroma [2] that consists of pancreatic stellate cells (PSC), immune cells, lymphatic and vascular endothelial cells, pathologically increased nerves and extracellular matrix (ECM), which create a complex tumor microenvironment that promotes pancreatic cancer development, invasion, metastasis and resistance to chemotherapy [3].
PSCs are the major cellular contributors to the desmoplastic reaction in PDAC and are thought to play an important role in the pathobiology of pancreatitis and pancreatic cancer [4]. Under normal conditions, PSCs are maintained in a "quiescent" state; however, in response to various stimuli they can switch to a more "immature" phenotype characterized by a tendency to synthesize certain biologically active molecules, such as matrix metalloproteinase (MMP)-2, MMP-9, and transforming growth factor (TGF)-β1 [5]. PSCs are activated by direct contact with pancreatic cancer cells (PCs) or by paracrine cytokines produced by PCs, including sonic hedgehog (SHH), connective tissue growth factor (CTGF), TGF-β1, and fibroblast growth factor (FGF) [4,6]. In turn, PSCs can act on PCs to inhibit their apoptosis, induce epithelial-mesenchymal transition (EMT), and promote stem cell-like phenotypes in pancreatic cancer cells, resulting in resistance to chemotherapy, distant metastasis and poor prognosis in patients with pancreatic cancer [7][8][9]. However, the detailed molecular mechanisms underlying the activation of PSCs in pancreatic cancer and the induction of tumor cell proliferation by the desmoplastic reaction are still unclear. Therefore, understanding the molecular mechanisms that control tumor growth and the desmoplastic reaction in PDAC is important.
EMT is a process in which cells lose their polarized epithelial character and acquire a migratory mesenchymal phenotype. Consequences of EMT are the loss of E-cadherin expression and the acquisition of mesenchymal markers, including fibronectin and Vimentin [10]. EMT is regulated by a complex network of cytokines, transcription factors, growth factors, signaling pathways, and the tumor microenvironment, and cells undergoing EMT can exhibit CSC-like properties. The transition of solid cancer cells from an epithelial to a mesenchymal phenotype enables the cancer cells to gain migratory and invasive properties, consequently leading to tumor metastasis and cancer stem cell properties [11].
The Hippo/YAP pathway was first discovered by genetic mosaic screens in Drosophila melanogaster [12,13], and since then, increasing evidence has demonstrated that the Hippo pathway also limits organ size in mammalian systems [14,15] by inhibiting cell proliferation and promoting apoptosis. YES-associated protein (YAP), a main component of the Hippo pathway, has been confirmed to be overexpressed and to participate in the tumorigenesis of a variety of cancers, including breast cancer [16], lung cancer [17], ovarian cancer [18], and liver cancer [19]. Previous studies have demonstrated that YAP-mediated molecular mechanisms in tumors include proliferation and apoptosis through interactions with proteins such as glypican-3 and sox4 as well as the secretion of proteins (such as CTGF and osteopontin) [20,21], indicating that YAP not only regulates autonomous processes in tumor cells but also may affect the tumor microenvironment. However, little is known regarding YAP expression and its relevance to pathological fibrosis in PDAC.
In this study, we aimed to determine the expression and function of YAP in PDAC and evaluate the relationship between YAP and the desmoplastic reaction in PDAC as well as the underlying molecular mechanisms. Taken together, these results provide additional evidence that YAP contributes to pancreatic cancer progression.
Human tissue specimens and histological analyses
We obtained 72 pancreatic cancer samples and 20 normal pancreatic tissues from the Department of Hepatobiliary Surgery, the First Affiliated Hospital of Xi'an Jiaotong University, between 2010 and 2014 after receiving approval from the Ethical Committee of Xi'an Jiaotong University. The pathological TNM status was assessed according to the criteria of the sixth edition of the TNM classification of the American Joint Commission on Cancer (AJCC). The pathological factors were examined by two pathologists. The results are summarized in Table 1. Immunohistochemical staining was performed using a SABC kit (Maxim, Fuzhou, China) according to the manufacturer's instructions. Briefly, the tissue sections were incubated with primary antibodies overnight at 4°C and incubated with the appropriate biotinylated secondary antibody for 30 min at room temperature, followed by 30 min of incubation with streptavidin peroxidase (Dako LSAB+HRP kit). After rinsing, the results were visualized using DAB, and the slides were counterstained with hematoxylin. The staining results were scored by 2 pathologists blinded to the clinical data, as described previously [22]. The YAP staining status was evaluated according to both nuclear and cytoplasmic expression. Depending on the percentage of positive cells and the staining intensity, YAP staining was classified into four groups: negative (0), weak (1+), moderate (2+) and strong (3+). Specifically, the percentage of positive cells was divided into five grades (percentage scores): < 10% (0), 10-25% (1), 25-50% (2), 50-75% (3), and > 75% (4). The intensity of staining was divided into four grades (intensity scores): no staining (0), light brown (1), brown (2), and dark brown (3). YAP staining positivity was determined by the formula: overall score = percentage score × intensity score. An overall score of ≤ 3 was defined as negative (0), of > 3 and ≤ 6 as weak (1+), of > 6 and ≤ 9 as moderate (2+), and of > 9 as strong (3+).
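The overall score is thus the product of the two sub-scores, followed by thresholding. A minimal sketch of the computation (our illustration with hypothetical function names, not part of the study protocol):

    def ihc_category(percent_positive: float, intensity: int) -> str:
        """Classify YAP IHC staining from % positive cells and intensity score (0-3)."""
        # Percentage score: <10% -> 0, 10-25% -> 1, 25-50% -> 2, 50-75% -> 3, >75% -> 4
        percentage_score = sum(percent_positive >= c for c in (10, 25, 50, 75))
        overall = percentage_score * intensity  # overall score = percentage x intensity
        if overall <= 3:
            return "negative (0)"
        if overall <= 6:
            return "weak (1+)"
        if overall <= 9:
            return "moderate (2+)"
        return "strong (3+)"

    # Example: 60% positive cells with brown (2) staining -> 3 x 2 = 6 -> weak (1+)
    print(ihc_category(60, 2))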
Cell lines, culture conditions and reagents
Human pancreatic cancer cell lines (AsPC-1, BxPC-3, CFPAC-1, Panc-1, and SW-1990) were purchased from the Chinese Academy of Sciences Cell Bank of Type Culture Collection (CBTCCCAS, Shanghai, China). All cell lines were cultured according to the instructions in the proper medium (HyClone, Logan, USA) supplemented with 10% fetal bovine serum (FBS), 100 U/ml penicillin and 100 μg/ml streptomycin. The cultures were incubated at 37°C in a humidified atmosphere containing 5% CO2. Recombinant human TGF-β1 was purchased from PeproTech (Rocky Hill, USA). Detailed information regarding the antibodies used in this study is presented in Additional file 1: Table S1. All reagents were stored as recommended by the manufacturer.
Genetically engineered transgenic mice
Pdx1-Cre mice, LSL-Kras^G12D mice and Trp53^fl/fl mice were purchased from the Nanjing Biomedical Research Institute of Nanjing University, Nanjing, China. The breeding of LSL-Kras^G12D/+; Pdx1-Cre (KC) transgenic mice was achieved by crossing LSL-Kras^G12D mice with Pdx1-Cre mice. LSL-Kras^G12D/+; Trp53^fl/+; Pdx1-Cre (KPC) mice were obtained by first crossing Trp53^fl/fl mice with Pdx1-Cre mice to generate Trp53^fl/fl; Pdx1-Cre offspring. Trp53^fl/fl; Pdx1-Cre mice were then crossed with LSL-Kras^G12D mice to generate KPC animals. All mice were housed under pathogen-free conditions with free access to water and food. All experimental protocols were approved by the Ethical Committee of the First Affiliated Hospital of Medical College, Xi'an Jiaotong University, Xi'an, China.
Stable YAP shRNA lentiviral transfection
YAP shRNA (shYAP) and negative control shRNA (shNC) in eukaryotic GV248 lentiviral vectors were purchased from GeneChem Co., Ltd. (Shanghai, China). The target sequence for YAP shRNA was CACCAAGCTAGATAAAGAA, and the negative control sequence was TTCTCCGAACGTGTCACGT. Cells were seeded at 1 × 10^5 cells/well into 6-well plates 24 h prior to transfection. Transfection was carried out using lentiviral particles (Panc-1 MOI = 10; BxPC-3 MOI = 20), polybrene (5 μg/ml) and ENi.S according to the manufacturer's protocol. Then, 12 h post-transfection, the virus-containing medium was replaced with complete medium, and 96 h post-transfection, all cells were selected with puromycin (Merck, USA) at a final concentration of 5 μg/ml (Panc-1) or 4 μg/ml (BxPC-3) for 10 days. Cells were then maintained in 2.5 μg/ml (Panc-1) or 2 μg/ml (BxPC-3) puromycin. For the generation of stably transfected cells, the medium was changed three times a week. After 3 weeks, puromycin-resistant colonies were isolated for further study. The stable YAP-suppressed PCs and the control PCs were named shYAP and shNC, respectively. The effect of gene silencing was evaluated by qRT-PCR and western blot.
Immunofluorescence
For fluorescent immunocytochemistry, the pancreatic cancer cells and pancreatic stellate cells were fixed for 20 min in 4% paraformaldehyde in PBS, and the endogenous peroxidase activity was quenched with 3% hydrogen peroxide. The specimens were permeabilized with 0.3% Triton X-100 in PBS for 15 min on ice, pre-blocked for 60 min with bovine serum albumin (BSA) at room temperature, and incubated with primary antibody overnight at 4°C. Staining was detected with the corresponding fluorescein-conjugated secondary antibodies (Jackson ImmunoResearch). Slides were mounted and examined using a Zeiss Instruments confocal microscope.
Colony formation assay
Cells (1000 shYAP or shNC cells) were seeded into a 35-mm Petri dish and allowed to adhere overnight. Cells were further cultured for 2 weeks to allow colonies to form. At the indicated time point, colonies were fixed with 4% paraformaldehyde, stained with 0.1% crystal violet solution, rinsed and then imaged. Colonies > 0.5 mm in diameter were counted using a microscope (Nikon Eclipse Ti-S, Japan) at 400× magnification.
Cell viability assay
Cell viability was measured using an MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay (Beyotime). The cells were seeded into 96-well plates at a density of 5000 cells per well and incubated overnight in 10% FBS medium. After incubation for 24, 48, and 72 h at 37°C and 5% CO2, the cell proliferation rate was determined. Briefly, 20 μl of MTT solution (5 mg/ml in distilled water) was added to each well, and the cells were incubated for 4 h at 37°C, after which the medium was removed. Then, 150 μl of DMSO was added, and the optical density (OD) was measured at 490 nm on a multifunctional microplate reader (POLARstar OPTIMA; BMG, Offenburg, Germany).
Real-time PCR assay
Total RNA was extracted using a Fastgen1000 RNA isolation system (Fastgen, Shanghai, China) according to the manufacturer's protocol. Total RNA was reverse-transcribed into cDNA using a Prime Script RT reagent kit (TaKaRa, Dalian, China). Real-time PCR was used to quantitatively examine the expression of YAP, CTGF, E-cadherin, N-cadherin and Vimentin at the mRNA level. Real-time PCR was conducted according to a previous report [23]. The PCR primer sequences for YAP, CTGF, E-cadherin, N-cadherin, Vimentin and β-actin are shown in Additional file 1: Table S2. The expression level of each target gene was determined using β-actin as the normalization control. Relative gene expression was calculated using the 2^−ΔΔCt method [24].
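For readers less familiar with the 2^−ΔΔCt arithmetic [24], a minimal sketch (our code; the Ct values are illustrative, not data from this study):

    def relative_expression(ct_target_sample, ct_actin_sample,
                            ct_target_control, ct_actin_control):
        """Relative mRNA level by the 2^-ddCt method, normalized to beta-actin."""
        d_ct_sample = ct_target_sample - ct_actin_sample      # normalize to beta-actin
        d_ct_control = ct_target_control - ct_actin_control
        dd_ct = d_ct_sample - d_ct_control                    # relative to control group
        return 2 ** (-dd_ct)

    # Illustrative Ct values for YAP in shYAP vs shNC cells:
    # ddCt = (28 - 18) - (25 - 18) = 3, so relative expression = 2^-3 = 0.125
    print(relative_expression(28.0, 18.0, 25.0, 18.0))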
Enzyme-linked immunosorbent assay (ELISA)
Cells were conditioned in serum-free medium for 48 h. The culture media were collected and centrifuged at 1500 rpm for 5 min to remove particles, and the supernatants were frozen at − 80°C until use. The production of TGF-β1 in the supernatants of PSCs was assessed by ELISA using a commercially available ELISA kit (R&D Systems, USA) according to the manufacturer's recommendations.
Cell invasion assay
Transwell chambers (pore size, 8.0 μm; Millipore, Billerica, USA) were coated with Matrigel (BD Biosciences, Oxford, UK). PCs were cultured in 6-well plates in medium containing 1% FBS for 24 h before treatment. PCs (200 μl, 5 × 10^4 cells) suspended in DMEM containing 1% FBS were seeded in the top chamber, and 500 μl of medium containing 10% FBS was placed in the lower chamber as a chemoattractant. The Transwell chamber was incubated for 48 h. The invaded cells on the bottom surface of the filter were fixed with methanol and stained with crystal violet (Boster Biological Technology Ltd., Wuhan, China). Cell migration and invasion were determined by counting the stained cells under a light microscope in 10 randomly selected fields.
Western blot analysis
Total protein was extracted using RIPA Lysis Buffer (Beyotime, Guangzhou, China), and the protein concentration was determined using a BCA protein assay kit (Pierce, Rockford, USA) according to the manufacturer's instructions. Then, a western blot assay was performed as previously described. The primary antibodies used are listed in Additional file 1: Table S1. The protein expression was visualized by enhanced chemiluminescence (Millipore, USA). Images were captured using a ChemiDoc XRS imaging system (Bio-Rad, USA), and Quantity One image software was used for the densitometry analysis of each band; β-actin was used as an internal loading control.
Isolation and culture of human pancreatic stellate cells
Normal pancreatic tissues (1.0-1.5 g) obtained from patients undergoing partial pancreatic resection for benign pancreatic conditions at the First Affiliated Hospital of Xi'an Jiaotong University were immediately collected in sterile ice-cold Hanks balanced salt solution (HBSS) containing 100 U/ml penicillin and 100 μg/ml streptomycin (Gibco). The histological diagnostic assessment of the specimens was confirmed by pathologists. Human pancreatic stellate cells (PSCs) were isolated using a density gradient method as previously described. Isolated PSCs were maintained at 37°C with 5% CO2 in DMEM/F12 (HyClone, Logan, USA) medium supplemented with 10% heat-inactivated fetal bovine serum (FBS) (HyClone), 100 U/ml penicillin and 100 μg/ml streptomycin. PSCs were identified by oil red staining of intracellular fat droplets and immunofluorescence of α-smooth muscle actin (α-SMA). Cells cultured under the above medium conditions for 24 h were used in additional experiments.
PC-PSC co-culture models
After pancreatic cancer cells were cultured in media supplemented with 10% FBS and grown to 50% confluence, the medium was changed to one containing 1% FBS, 100 U/ml penicillin and 100 μg/ml streptomycin. Two days later, cancer cell conditioned medium was collected, centrifuged and filtered prior to incubation with isolated PSCs as previously described [3], and the PSCs were incubated with the conditioned medium for up to 2 days. For the direct PC-PSC co-culture model, PCs and PSCs were proportionally mixed (cell proportion, 2:1) and seeded into 6-well plates.
In vivo tumor model
Mice were housed and maintained under specific pathogen-free conditions in facilities approved by the Animal Care and Use Committee guidelines of Xi'an Jiaotong University, Shaanxi, China. Investigations were conducted in accordance with ethical standards, the Declaration of Helsinki and national and international guidelines, and were approved by the authors' institutional review board. The mice were used according to institutional guidelines when they were 6 to 8 weeks of age. Cells were resuspended in a 1:1 (v/v) mixture of culture medium and Matrigel (BD Biosciences, San Jose, CA, USA), and 1 × 10^6 BxPC-3-shNC cells, 1 × 10^6 BxPC-3-shYAP cells, 0.8 × 10^6 BxPC-3-shNC cells + 2 × 10^5 PSCs, or 0.8 × 10^6 BxPC-3-shYAP cells + 2 × 10^5 PSCs were injected s.c. into the right flank of nude mice. A total of 5 mice per group were used. After 8 weeks, the animals were sacrificed, and the subcutaneous tumors were isolated. Tumors were fixed in formalin as soon as possible and embedded in paraffin. Tumor volume was calculated as (length × width^2)/2. Tumor samples were analyzed using H&E staining. Representative images were taken of each tumor using a light microscope at 400× magnification.
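The volume formula is the standard caliper approximation; as a one-line sketch (our code, illustrative measurements):

    def tumor_volume(length_mm: float, width_mm: float) -> float:
        """Tumor volume in mm^3 by the caliper formula (length x width^2) / 2."""
        return length_mm * width_mm ** 2 / 2.0

    print(tumor_volume(10.0, 6.0))  # 180.0 mm^3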
Statistical analysis
Statistical analysis was performed using the SPSS statistical software package (version 13.0). The significance of the patient specimen data was determined using Pearson's correlation coefficient or Fisher's exact test. The significance of the in vitro and in vivo data was determined using Student's t-test (2-tailed), the Mann-Whitney test (2-tailed), or one-way ANOVA. P < 0.05 was considered statistically significant.
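As a hedged sketch of the corresponding two-tailed tests in open-source tooling (SciPy rather than SPSS 13.0; the arrays are placeholders, not study data):

    import numpy as np
    from scipy import stats

    # Placeholder tumor volumes (mm^3) for two groups, e.g. shNC vs shYAP
    sh_nc = np.array([850.0, 920.0, 780.0, 990.0, 870.0])
    sh_yap = np.array([410.0, 380.0, 450.0, 500.0, 430.0])

    t_stat, p_t = stats.ttest_ind(sh_nc, sh_yap)               # two-tailed Student's t-test
    u_stat, p_u = stats.mannwhitneyu(sh_nc, sh_yap,
                                     alternative="two-sided")  # two-tailed Mann-Whitney
    print(f"t-test p = {p_t:.4f}, Mann-Whitney p = {p_u:.4f}")  # P < 0.05: significant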
YAP is overexpressed in pancreatic cancer tissues
To determine whether YAP is overexpressed at the protein level in human PDAC tissues, we examined YAP expression in five human pancreatic cancer tissues and the corresponding normal pancreatic tissues via western blot. The results showed that the levels of YAP protein in pancreatic cancer tissues were significantly elevated compared with normal pancreatic tissues (Fig. 1A). To further confirm these results, pancreatic tissue sections from 72 patients diagnosed with PDAC and 20 normal pancreatic specimens were analyzed using immunohistochemistry (IHC). Intense IHC staining of YAP was detected in the cytoplasm and nucleus of cancer cells, with a predominantly nuclear localization pattern, whereas rare staining events were observed in normal pancreatic tissues (Fig. 1B). The statistics for YAP expression levels in the different pancreatic tissue groups are shown in Table 1. Figure 1 shows representative pictures of negative (0; Fig. 1Bc), weak (1+; Fig. 1Bd), moderate (2+; Fig. 1Be) and strong (3+; Fig. 1Bf) YAP staining in pancreatic cancer. As shown in Fig. 1B, no (Fig. 1Ba) or moderate (Fig. 1Bb) YAP immunoreactivity was observed in normal pancreatic tissues. YAP expression was significantly increased in PDAC compared to normal pancreatic tissues (P < 0.001; Fig. 1C).
Notably, the χ² analysis revealed that histologic markers of aggressive disease, including tumor-node-metastasis (TNM) stage (P = 0.010; Fig. 1D and E) and pM stage (P = 0.038), were significantly associated with YAP expression levels (Table 1). LSL-Kras^G12D/+; Pdx1-Cre (KC) and LSL-Kras^G12D/+; Trp53^fl/+; Pdx1-Cre (KPC) mice, in which Pdx1 induces the expression of mutant Kras alone or together with mutant Trp53 in murine pancreatic epithelium, fully recapitulate the pathogenesis of human PDAC and are generally regarded as two of the best genetically engineered mouse models (GEMMs) for human PDAC. Next, we detected YAP expression in KC and KPC mice, and we found that YAP protein abundance was also markedly greater in pancreatic tissue from KC mice with early and late pancreatic intraepithelial neoplasia (PanIN) or KPC mice with fully established PDAC compared with wild-type mice (Fig. 1F). These findings indicate that YAP plays critical roles in PDAC development and progression and may be a valuable biomarker for this disease.
Knockdown of YAP inhibits pancreatic cancer cell proliferation in vitro
To determine the effect of YAP on PDAC growth, we analyzed its expression in PDAC cell lines. YAP was highly expressed in AsPC-1, BxPC-3 and Panc-1 cells but expressed at relatively low levels in CFPAC-1 and SW-1990 cells (Fig. 2a). Immunofluorescence showed that YAP was located in the nucleus and the cytoplasm of BxPC-3 and Panc-1 cells (Fig. 2b). Accordingly, we chose BxPC-3 and Panc-1 cells for further experiments. The lentiviral vector YAP-shRNA was used to suppress YAP expression in these two pancreatic cancer cell lines. qRT-PCR and western blot results showed that the YAP gene was significantly knocked down after YAP-shRNA transfection (Fig. 2c and d). At various time points (24, 48, and 72 h), the proliferation rates of BxPC-3-shYAP and Panc-1-shYAP cells were determined with an MTT assay. The results show that YAP knockdown inhibited the proliferation of both pancreatic cancer cell lines (Fig. 2e and f). Next, we examined the effect of YAP on the colony formation capability of Panc-1 and BxPC-3 cells; the colony formation ability of both cell lines was clearly decreased after knockdown of YAP expression (Fig. 2g and h).
Knockdown of YAP inhibits pancreatic cancer cell invasion through inhibiting EMT
To elucidate the role of YAP in the invasive ability of pancreatic cancer cells, we knocked down YAP expression with the lentiviral vector YAP-shRNA in BxPC-3 cells and Panc-1 cells. Using Transwell chamber assays with Matrigel, a significant decrease in the invasion of shYAP cells was observed compared to shNC cells (Fig. 3a and b). Furthermore, western blot verified that knockdown of YAP resulted in a marked decrease in the expression of N-cadherin and Vimentin and a significant increase in E-cadherin (Fig. 3c and e), consistent with reversion to an epithelial phenotype. These observations were confirmed at the mRNA level using real-time PCR (Fig. 3d and f). Together, these data suggest that YAP might participate in the invasion process of PDAC by regulating EMT phenotypes.
Overexpression of YAP is associated with desmoplastic reaction via activation of pancreatic stellate cells
Several studies have indicated that CTGF is a target of YAP. Consistent with our results, real-time PCR and western blot showed that CTGF was down-regulated after YAP knockdown (Fig. 4a and b). The IHC results also showed a positive correlation between YAP and CTGF staining in pancreatic cancer (Fig. 4c). We noted that both pancreatic cancer cells and pancreatic stellate cells stained positive for YAP, and YAP could be found in both the nucleus and the cytoplasm (Fig. 4c). YAP expression was positively correlated with the expression of α-SMA, a marker of activated stellate cells, and this correlation was also present in the KPC PDAC tissues (Fig. 4c). Next, we further investigated the effects of YAP on the tumor-stroma interactions between PCs and PSCs, and PC-PSC co-culture models were designed. BxPC-3 cells have the highest YAP expression and were derived from a primary pancreatic cancer, so we chose BxPC-3 for further investigation of the interplay among YAP, PCs and PSCs. For the indirect co-culture model (Fig. 4d), PC supernatant was prepared as conditioned medium (CM) after BxPC-3-shNC or BxPC-3-shYAP cells were cultured for 24 h. Then, the conditioned medium mixture was added to serum-free, starvation-synchronized PSCs. After 48 h, PSC activation was determined by examining the α-SMA level using immunofluorescence labeling and western blot. As shown in Fig. 4g and h, the PSC activation level was reduced after incubation with BxPC-3-shYAP-CM compared with BxPC-3-shNC-CM, as revealed by α-SMA expression. Increased TGF-β1 synthesis and secretion is a hallmark of activated PSCs. The immunoblotting and ELISA results confirmed the inhibitory effect of BxPC-3-shYAP-CM on PSC activation (Fig. 4f and g). To investigate the effects of YAP on the activation of PSCs in a direct PC-PSC co-culture system (Fig. 4e), immunofluorescence was used to visualize CTGF and PSC activation in the co-culture system. As shown in Fig. 4i, the immunofluorescence results indicate that the PSC activation level was reduced in direct co-culture conditions after YAP knockdown in BxPC-3 cells, as revealed by α-SMA staining. Surprisingly, CTGF was mainly present in the PCs and was reduced after YAP knockdown in BxPC-3 cells, as revealed by CTGF staining together with α-SMA, which marked PSCs. Taken together, these data indicate that down-regulation of YAP expression in PCs inhibited PSC activation in a co-culture system, and this may be associated with a decrease in CTGF.
YAP is a critical mediator of TGF-β1/SMAD2-induced EMT and cell invasion
PSC activation has been recognized as a major driving force in the tumor microenvironment that promotes pancreatic cancer progression, and TGF-β1 plays an important role in tumor-stroma interactions. Therefore, we next examined whether TGF-β1 could reverse the effects of YAP inhibition on pancreatic cancer invasion and EMT phenotypes. BxPC-3 cells were cultured for 48 h with or without TGF-β1. The results showed that the invasion ability (Fig. 5a and b) and the expression of mesenchymal-related genes (N-cadherin and Vimentin) in BxPC-3-shNC cells were significantly increased after treatment with 10 ng/ml TGF-β1. However, simultaneous knockdown of YAP using shRNA abolished TGF-β1-induced cell invasion (Fig. 5a and b) and EMT (Fig. 5d). Studies have indicated that YAP can bind TGF-β1-activated SMAD complexes to control SMAD localization and activity in a variety of cell types, including mammary epithelial cells [25] and breast cancer cells [26]. Our results showed that simultaneous knockdown of YAP using shRNA did not affect the p-SMAD2 level (Fig. 5f). However, the immunofluorescence results showed that SMAD2 nuclear localization was significantly decreased when YAP was knocked down, and YAP knockdown also abolished TGF-β1-induced SMAD2 nuclear localization (Fig. 5c). We also found that YAP was significantly up-regulated after treatment with TGF-β1 in a time-dependent and dose-dependent manner (Fig. 5e and g), and this up-regulation was reversed after treatment with the TGF-β1 receptor inhibitor SB431542 (Fig. 5h). Taken together, our observations indicate that YAP is a critical mediator of TGF-β1-induced tumorigenic events, including EMT and cell invasion.
Fig. 4 (caption, panels c-i). c Representative IHC staining for YAP, CTGF and α-SMA in NP and PDAC specimens, accompanied by strong or weak YAP staining, and NP specimens from a WT mouse and PDAC specimens from a KPC mouse model (bar, 100 μm). d A schematic diagram of indirect co-culture conditions. e A schematic diagram of direct co-culture conditions. f PSCs were treated with mixed conditioned medium from BxPC-3-shNC or BxPC-3-shYAP cells for 48 h; then, the supernatant of PSCs was collected, and an ELISA was performed to evaluate the TGF-β1 level. g PSCs were treated with mixed conditioned medium from BxPC-3-shNC or BxPC-3-shYAP cells for 48 h, and western blot assays were performed to evaluate the α-SMA and TGF-β1 levels. h PSCs were treated with mixed conditioned medium from BxPC-3-shNC or BxPC-3-shYAP cells for 48 h, and cells were stained with an anti-α-SMA antibody (green) and counterstained with DAPI (blue) to identify nuclei. Representative images demonstrate that the α-SMA level of PSCs in conditioned medium from BxPC-3-shYAP cells was significantly reduced (bar, 50 μm). i PCs (BxPC-3-shYAP or BxPC-3-shNC) and PSCs were cultured together for 48 h. Then, immunofluorescence analysis was performed to detect α-SMA (green) and CTGF (red) expression in the cells (bar, 100 μm). *P < 0.05
Fig. 5 (caption). YAP is a critical mediator of TGF-β1/SMAD2-induced EMT and cell invasion. a The effect of TGF-β1 (10 ng/ml) on BxPC-3-shYAP and BxPC-3-shNC PC invasion capability was assessed using a Matrigel invasion assay. PCs with or without TGF-β1 (10 ng/ml) pretreatment for 48 h were seeded into Matrigel-coated invasion chambers. b The invasive cells were quantified by counting the number of cells in 10 random fields at 100× magnification. c BxPC-3-shYAP and BxPC-3-shNC PCs were pretreated with or without TGF-β1 (10 ng/ml) for 6 h, and cells were stained with an anti-Smad2 antibody (red) and counterstained with DAPI (blue) to identify nuclei (bar, 50 μm). Representative images demonstrate that knockdown of YAP not only reduced TGF-β1-induced nuclear accumulation of Smad2 but also diminished whole-cell Smad2 levels. d BxPC-3-shYAP and BxPC-3-shNC PCs were pretreated with or without TGF-β1 (10 ng/ml) for 48 h, and western blot assays were performed to evaluate the expression of E-cadherin, N-cadherin, Vimentin and YAP at the protein level. e BxPC-3 cells were treated with TGF-β1 (10 ng/ml) for the indicated times (10 min, 30 min, 1 h, 3 h and 6 h), and western blot assays were performed to evaluate the expression of YAP and t-SMAD2 and the p-SMAD2 level. f BxPC-3-shYAP and BxPC-3-shNC PCs were pretreated with or without TGF-β1 (10 ng/ml) for 1 h, and western blot assays were performed to evaluate the p-SMAD2 and YAP levels. g BxPC-3 cells were treated with TGF-β1 at the indicated concentrations (1, 2, 5, 10 and 20 ng/ml) for 24 h, and western blot assays were performed to evaluate YAP protein expression. h BxPC-3 cells were pretreated with the specific TGF-β1 inhibitor SB431542 (10 μM) for 1 h and then treated with TGF-β1 (10 ng/ml) for 24 h, and western blot assays were performed to evaluate YAP protein expression. Column: mean; bar: SD; *P < 0.05; **P < 0.01
Knockdown of YAP in pancreatic cancer cells suppresses the tumor growth and desmoplastic reaction in vivo
Based on the promising in vitro findings reported above, we next sought to test the role of YAP in pancreatic cancer cells in tumor progression and PSC activation in vivo. We established a subcutaneous pancreatic cancer xenograft model in nude mice through injection of PCs alone or PCs plus PSCs. The tumor volume was monitored, as shown in Fig. 6a, and we noted that BxPC-3-shYAP cells exhibited a notable reduction in tumor growth rates compared to BxPC-3-shNC cells. Moreover, co-injection of BxPC-3-shNC cells and PSCs resulted in a significant increase in tumor growth rates compared to BxPC-3-shNC cells alone. In contrast, there was no comparable induction of tumor growth when PSCs were co-injected with BxPC-3-shYAP cells, in which YAP expression was repressed (Fig. 6a and b). At the same time, we observed different histologic structures, especially in the stromal component, in the tumor tissues. The results showed that there were more stromal components in the BxPC-3-shNC+PSC co-injection group than in the BxPC-3-shYAP+PSC co-injection group. Consistent with the in vitro studies, the IHC results showed that the proliferation of cancer cells was inhibited in the BxPC-3-shYAP group, as there was less and weaker expression of PCNA in the BxPC-3-shYAP group compared with the BxPC-3-shNC group (Fig. 6c). Moreover, the tumor tissues from mice in the BxPC-3-shYAP+PSC co-injection group exhibited lower staining levels of the stromal marker α-SMA compared with the BxPC-3-shNC+PSC co-injection group (Fig. 6c). Taken together, our in vivo findings indicate that YAP is critical for tumor growth and the desmoplastic reaction.
Discussion
PDAC is a notoriously aggressive malignancy that responds poorly to most chemotherapeutic agents; a major reason is the complex pancreatic cancer tumor microenvironment, which contributes to tumor invasion and the chemotherapy-resistant phenotype of pancreatic cancer cells. Previous studies have demonstrated that YAP, the main effector of the Hippo pathway, is an attractive target of investigation in mammalian malignancies, and it has been found to play an important role in the development of breast cancer [16], lung cancer [17], ovarian cancer [18], and liver cancer [19]. In this study, we explored the role of YAP in the invasiveness and proliferation of pancreatic cancer cells and the activation of PSCs. Our study showed that YAP expression was highly up-regulated in pancreatic cancer tissues compared to normal pancreatic tissues. Moreover, the YAP expression level correlated well with the TNM stage of pancreatic cancer patients. Furthermore, our results showed that YAP expression was generally more intense in the nucleus than in the cytoplasm, indicating that YAP was in an active state and likely promoting pancreatic cancer progression. Consistent with previous studies [27,28], our study showed that the staining of YAP in normal pancreatic tissues is restricted to acinar cells and small ducts, subpopulations that likely represent the normal function of YAP in maintaining tissue homeostasis. Furthermore, YAP is also up-regulated in mouse PanIN as well as in stellate cells associated with PDAC. These findings indicate that YAP may be involved in pancreatic tissue regeneration and that deregulation of YAP may play a role in neoplastic transformation and stellate cell functions in PDAC.
YAP protein expression was high in pancreatic cancer cells, and the immunofluorescence results showed that YAP is located in both the nucleus and the cytoplasm, with stronger nuclear than cytoplasmic expression, consistent with a previous study [29]. We found that knockdown of YAP expression via shRNA lentiviral transfection inhibited the proliferation, colony formation and invasion ability of pancreatic cancer cells. EMT is described as a dynamic and reversible biological process, and increasing evidence suggests that EMT plays important roles in the progression of cancer and may provide a rationale for developing more effective cancer therapies [30]. The EMT program is characterized by Vimentin and N-cadherin expression and E-cadherin suppression, representing a highly invasive and mesenchymal phenotype [31]. Consistent with previous studies [32,33], our results showed that knockdown of YAP inhibited pancreatic cancer cell invasion, accompanied by a dramatic reduction in Vimentin and N-cadherin mRNA and protein levels, whereas both E-cadherin mRNA and protein levels were notably increased. It is possible that YAP is a cofactor that interacts with other genes that mediate the EMT process and invasion.
Several studies have suggested that the tumor microenvironment plays a supportive role in PDAC progression [2,[34][35][36]. During cancer initiation and development, quiescent PSCs are transformed into an activated, myofibroblast-like phenotype that is characterized by α-SMA expression and the production of excessive ECM proteins [37]. Increasing evidence suggests that activated PSCs create a stroma-rich and hypoxic microenvironment that facilitates PDAC tumor growth, metastatic spread, perineural invasion, and resistance to chemoradiotherapy [35,38,39]. Our IHC results showed that there is a positive correlation between YAP expression and the desmoplastic reaction in human pancreatic cancer tissues and in a KPC mouse model. Activation of PSCs was inhibited when they were co-cultured with BxPC-3-shYAP PCs. The α-SMA and TGF-β1 expression in PSCs was down-regulated, and this may be due to the reduction in CTGF, which is a target of YAP and plays an important role in PSC activation. In vivo experiments revealed that mice injected with a mixed BxPC-3-shNC+PSC suspension grew larger tumors with extensive desmoplasia and exhibited an increased proliferation tendency compared with mice injected with BxPC-3-shNC cells alone or a mixed BxPC-3-shYAP+PSC suspension. With respect to mechanism, knockdown of YAP decreases CTGF production and release from PCs, blocking paracrine-mediated PSC activation, and disrupts tumor-stroma interactions.
TGF-β1 is a versatile cytokine that regulates a variety of biological processes through Smad-dependent signaling [40]. TGF-β1 was the first inducer of EMT described in normal mammary epithelial cells, and it was shown to act by signaling through its receptor serine/threonine kinase complex [41]. It has been documented that cancer cells exhibit increased invasion and metastasis abilities in response to TGF-β1 in various cancer types [42,43]. Herein, our study showed that TGF-β1 could increase PC invasion and migration by inducing EMT, and this effect was reversed when YAP was knocked down. Western blot showed that knockdown of YAP did not affect the TGF-β1-induced p-SMAD2 level, but the immunofluorescence results showed that TGF-β1-induced SMAD2 nuclear localization could be reversed when YAP was knocked down. More interestingly, we also found that YAP was significantly up-regulated after treatment with TGF-β1 in a time-dependent and dose-dependent manner, and this up-regulation was reversed after treatment with the TGF-β1 receptor inhibitor SB431542. Taken together, our observations indicate that YAP is a critical mediator of TGF-β1-induced tumorigenic events, including EMT, cell migration, and invasion. Knockdown of YAP disrupted tumor-stroma interactions via reduction of TGF-β1 production by PSCs, and this may be the main mechanism underlying these effects.
Conclusions
The current study demonstrates that YAP is up-regulated in pancreatic cancers and plays an important role in tumor proliferation, migration and invasion in vitro by modulating EMT-related factors. Knockdown of YAP decreases CTGF production and release from PCs, blocking paracrine-mediated PSC activation and in turn disrupting TGF-β1-mediated tumor-stroma interactions. Thus, YAP may play an important role in EMT and represents a promising therapeutic target for preventing pancreatic cancer progression. In particular, the development of a YAP inhibitor may provide a new class of potent and selective anticancer agents.
Additional file
Additional file 1: Table S1. A list of the utilized primary antibodies. | 8,344 | 2018-03-27T00:00:00.000 | [
"Biology",
"Medicine"
] |
Hochschild homology, trace map and ζ-cycles
In this paper we consider two spectral realizations of the zeros of the Riemann zeta function. The first one involves all non-trivial (i.e. non-real) zeros and is expressed in terms of a Laplacian intimately related to the prolate wave operator.
Introduction
In this paper we give a Hochschild homological interpretation of the zeros of the Riemann zeta function. The root of this result is in the recognition that the map $(Ef)(u) = u^{1/2}\sum_{n>0} f(nu)$, which is defined on a suitable subspace of the linear space of complex-valued even Schwartz functions on the real line, is a trace in Hochschild homology, if one brings into the construction the projection $\pi: \mathbb{A}_{\mathbb{Q}} \to \mathbb{Q}^\times\backslash\mathbb{A}_{\mathbb{Q}}$ from the rational adèles to the adèle classes (see Section 3). In this paper, we shall consider two spectral realizations of the zeros of the Riemann zeta function. The first one involves all non-trivial (i.e. non-real) zeros and is expressed in terms of a Laplacian intimately related to the prolate wave operator (see Section 4). The second spectral realization is sharper inasmuch as it affects only the critical zeros. The main players here are the ζ-cycles introduced in [7], and the Scaling Site [6] as their parameter space, which encodes their stability by coverings. The ζ-cycles give the theoretical geometric explanation for the striking coincidence between the low-lying spectrum of a perturbed spectral triple introduced therein (see [7]) and the low-lying (critical) zeros of the Riemann zeta function. The definition of a ζ-cycle derives, as a byproduct, from scale-invariant Riemann sums for complex-valued functions on the real half-line $[0,\infty)$ with vanishing integral. For any $\mu \in \mathbb{R}_{>1}$, one implements the linear (composite) map $\Sigma_\mu E: \mathcal{S}^{ev}_0 \to L^2(C_\mu)$ from the Schwartz space $\mathcal{S}^{ev}_0$ of real-valued even functions $f$ on the real line, with $f(0) = 0$ and vanishing integral, to the Hilbert space $L^2(C_\mu)$ of square-integrable functions on the circle $C_\mu = \mathbb{R}^*_+/\mu^{\mathbb{Z}}$ of length $L = \log\mu$, where $(\Sigma_\mu g)(u) := \sum_{k\in\mathbb{Z}} g(\mu^k u)$.
The map $\Sigma_\mu$ commutes with the scaling action $\mathbb{R}^*_+ \ni \lambda \mapsto f(\lambda^{-1}x)$ on functions, while $E$ is invariant under a normalized scaling action on $\mathcal{S}^{ev}_0$. In this set-up one has

Definition. A ζ-cycle is a circle $C$ of length $L = \log\mu$ whose Hilbert space $L^2(C)$ contains $\Sigma_\mu E(\mathcal{S}^{ev}_0)$ as a non-dense subspace.

The next result is known (see [7], Theorem 6.4)

Theorem 1.1. The following facts hold:
(i) The spectrum of the scaling action of $\mathbb{R}^*_+$ on the orthogonal complement of $\Sigma_\mu E(\mathcal{S}^{ev}_0)$ in $L^2(C_\mu)$ is contained in the set of the imaginary parts of the zeros of the Riemann zeta function $\zeta(z)$ on the critical line $\Re(z) = \frac12$.
(ii) Let $s > 0$ be a real number such that $\zeta(\frac12 + is) = 0$. Then any circle $C$ whose length is an integral multiple of $\frac{2\pi}{s}$ is a ζ-cycle, and the spectrum of the action of $\mathbb{R}^*_+$ on $(\Sigma_\mu E(\mathcal{S}^{ev}_0))^\perp$ contains $s$.

Theorem 1.1 states that for a countable and dense set of values of $L \in \mathbb{R}_{>0}$, the Hilbert spaces $H(L) := (\Sigma_\mu E(\mathcal{S}^{ev}_0))^\perp$ are non-trivial and, more importantly, that as $L$ varies in that set, the spectrum of the scaling action of $\mathbb{R}^*_+$ on the family of the $H(L)$'s is the set $Z$ of imaginary parts of critical zeros of the Riemann zeta function. In fact, in view of the proven stability of ζ-cycles under coverings, the same element of $Z$ occurs infinitely many times in the family of the $H(L)$'s. This stability under coverings displays the Scaling Site $S = [0,\infty) \rtimes \mathbb{N}^\times$ as the natural parameter space for the ζ-cycles. In this paper, we show (see Section 5) that after organizing the family $H(L)$ as a sheaf over $S$ and using sheaf cohomology, one obtains a spectral realization of critical zeros of the Riemann zeta function. The key operation in the construction of the relevant arithmetic sheaf is given by the action of the multiplicative monoid $\mathbb{N}^\times$ on the sheaf of smooth sections of the bundle $L^2$ determined by the family of Hilbert spaces $L^2(C_\mu)$, $\mu = \exp L$, as $L$ varies in $(0,\infty)$. For each $n \in \mathbb{N}^\times$ there is a canonical covering map $C_{\mu^n} \to C_\mu$, where the action of $n$ corresponds to the operation of sum on the preimage of a point in $C_\mu$ under the covering. This action turns the (sub)sheaf of smooth sections vanishing at $L = 0$ into a sheaf $\mathcal{L}^2$ over $S$. The family of subspaces $\Sigma_\mu E(\mathcal{S}^{ev}_0) \subset L^2(C_\mu)$ generates a closed subsheaf $\Sigma E \subset \mathcal{L}^2$, and one then considers the cohomology of the related quotient sheaf $\mathcal{L}^2/\Sigma E$. In view of the property of $\mathbb{R}^*_+$-equivariance under scaling, this construction determines a spectral realization of critical zeros of the Riemann zeta function, also taking care of eventual multiplicities. Our main result is the following

Theorem 1.2. The cohomology $H^0(S, \mathcal{L}^2/\Sigma E)$, endowed with the induced canonical action of $\mathbb{R}^*_+$, is isomorphic to the spectral realization of critical zeros of the Riemann zeta function given by the action of $\mathbb{R}^*_+$, via multiplication by $\lambda^{is}$, on the quotient of the Schwartz space $\mathcal{S}(\mathbb{R})$ by the closure of the ideal generated by multiples of $\zeta(\frac12 + is)$.
This paper is organized as follows. Section 2 recalls the main role played by the (image of the) map $E$ in the study of the spectral realization of the critical zeros of the Riemann zeta function. In Section 3 we show the identification of the Hochschild homology $HH_0$ of the noncommutative space $\mathbb{Q}^\times\backslash\mathbb{A}_{\mathbb{Q}}$ with the coinvariants for the action of $\mathbb{Q}^\times$ on the Schwartz algebra, using the (so-called) "wrong way" functoriality map $\pi_!$ associated to the projection $\pi: \mathbb{A}_{\mathbb{Q}} \to \mathbb{Q}^\times\backslash\mathbb{A}_{\mathbb{Q}}$. We also stress the relevant fact that the Fourier transform on adèles becomes canonical after passing to $HH_0$ of the adèle class space of the rationals. The key Proposition 3.3 describes the invariant part of such $HH_0$ as the space of even Schwartz functions on the real line and identifies the trace map with the map $E$. Section 4 takes care of the two vanishing conditions implemented in the definition of $E$ and introduces the operator $\Delta = H(1+H)$ ($H$ being the generator of the scaling action of $\mathbb{R}^*_+$ on $\mathcal{S}(\mathbb{R})^{ev}$), playing the role of the Laplacian and intimately related to the prolate operator. Finally, Section 5 is the main technical section of this paper, since it contains the proof of Theorem 1.2.
The map E and the zeros of the zeta function
The adèle class space of the rationals $\mathbb{Q}^\times\backslash\mathbb{A}_{\mathbb{Q}}$ is the natural geometric framework to understand the Riemann-Weil explicit formulas for L-functions as a trace formula [3]. The essence of this result lies mainly in the delicate computation of the principal values involved in the distributions appearing in the geometric (right-hand) side of the semi-local trace formula of op.cit. (see Theorem 4 for the notations), later recast in the softer context of [13]. There is a rather simple analogy related to the spectral (left-hand) side of the explicit formulas for a global field K (see [4], Section 2, for the notations) which may help one to realize how the sum over the zeros of the zeta function appears. Here this relation is simply explained. Given a complex-valued polynomial $P(x) \in \mathbb{C}[x]$, one may identify the set of its zeros as the spectrum of the endomorphism $T$ of multiplication by the variable $x$ computed in the quotient algebra $\mathbb{C}[x]/(P(x))$. It is well known that the matrix of $T$, in the basis of powers of $x$, is the companion matrix of $P(x)$. Furthermore, the trace of its powers, readily computed from the diagonal terms of powers of the companion matrix in terms of the coefficients of $P(x)$, gives the Newton-Girard formulae.¹

¹ This is an efficient way to find the power sums of the roots of $P(x)$ without actually finding the roots explicitly. Newton's identities supply the calculation via a recurrence relation with known coefficients.
If one transposes this result to the case of the Riemann zeta function $\zeta(s)$, one sees that the multiplication by $P(x)$ is replaced here with the map
(2.1) $E(f)(u) = u^{1/2}\sum_{n\ge 1} f(nu)$,
while the role of $T$ (the multiplication by the variable) is played by the scaling operator $u\partial_u$. These statements may become more evident if one brings in the Fourier transform. Indeed, let $f \in \mathcal{S}(\mathbb{R})^{ev}$ be an even Schwartz function and let $w(f)(u) = u^{1/2}f(u)$ be the unitary identification of $f$ with a function in $L^2(\mathbb{R}^*_+, d^*u)$, where $d^*u := du/u$ denotes the Haar measure. Then, by composing $w$ with the (multiplicative) Fourier transform $\mathbb{F}$, one obtains $\mathbb{F}(w(f)) = \psi$, where $\psi(z) := \int_{\mathbb{R}^*_+} f(u)\,u^{\frac12 - iz}\,d^*u$. The function $\psi(z)$ is holomorphic in the complex half-plane $H = \{z \in \mathbb{C} \mid \Im(z) > -\frac12\}$, since $f(u) = O(u^{-N})$ for $u \to \infty$. Moreover, for $n \in \mathbb{N}$, one has $\int_{\mathbb{R}^*_+} f(nu)\,u^{\frac12 - iz}\,d^*u = n^{-\frac12 + iz}\,\psi(z)$. In the region $\Im(z) > \frac12$ one derives, by applying the Fubini theorem, the equality $\int_{\mathbb{R}^*_+} E(f)(u)\,u^{-iz}\,d^*u = \sum_{n\ge 1} n^{-\frac12 + iz}\,\psi(z)$. Thus, for all $z \in \mathbb{C}$ with $\Im(z) > \frac12$ one obtains
(2.2) $\mathbb{F}(E(f))(z) = \zeta(\tfrac12 - iz)\,\psi(z)$.
If one assumes now that the Schwartz function $f$ fulfills $\int_{\mathbb{R}} f(x)\,dx = 0$, then $\psi(\frac{i}{2}) = 0$. Both sides of (2.2) are holomorphic functions in $H$: for the integral on the left-hand side, this can be seen by using the estimate $E(f)(u) = O(u^{1/2})$, which follows from the Poisson formula. This proves that (2.2) continues to hold in the whole complex half-plane $H$. Thus one sees that the zeros of $\zeta(\frac12 - iz)$ in the strip $|\Im(z)| < \frac12$ are the common zeros of all the functions $\mathbb{F}(E(f))(z)$. One may eventually select the even Schwartz function $f(x) = e^{-\pi x^2}(2\pi x^2 - 1)$ to produce a specific instance where the zeros of $\mathbb{F}(E(f))$ are exactly the non-trivial zeros of $\zeta(\frac12 - iz)$, since in this case $\psi(z) = \frac14\,\pi^{-\frac14 + \frac{iz}{2}}\,(-1 - 2iz)\,\Gamma\!\left(\tfrac14 - \tfrac{iz}{2}\right)$.
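As a quick numerical illustration (a sketch of ours, not part of the paper; the truncation bounds are assumptions), one can check the vanishing integral of the explicit $f$ above and the stated behavior of $E(f)$:

    import numpy as np
    from scipy.integrate import quad

    # Explicit even Schwartz function from the text: f(x) = exp(-pi x^2) (2 pi x^2 - 1)
    def f(x):
        return np.exp(-np.pi * x**2) * (2 * np.pi * x**2 - 1)

    # Vanishing integral over R, so that psi(i/2) = 0
    total, _ = quad(f, -np.inf, np.inf)
    print(f"integral of f over R: {total:.2e}")  # ~0 up to quadrature error

    def E(u, n_max=100_000):
        # (Ef)(u) = u^{1/2} * sum_{n>0} f(n u), truncated; f decays rapidly at infinity
        n = np.arange(1, n_max + 1)
        return np.sqrt(u) * f(n * u).sum()

    for u in (0.01, 0.1, 1.0, 5.0):
        print(u, E(u))  # O(u^{1/2}) as u -> 0, rapid decay as u -> infinity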
Geometric interpretation
In this section we continue the study of the map $E$, with the goal of achieving a geometric understanding of it. This is obtained by bringing into the construction the adèle class space of the rationals, whose role is to grant the replacement, in (2.1), of the summation over the monoid $\mathbb{N}^\times$ with the summation over the group $\mathbb{Q}^\times$. Then, up to the factor $u^{1/2}$, $E$ is understood as the composite $\iota^* \circ \pi_!$, where the map $\iota: \mathbb{Q}^\times\backslash\mathbb{A}^\times_{\mathbb{Q}}/\hat{\mathbb{Z}}^\times \to \mathbb{Q}^\times\backslash\mathbb{A}_{\mathbb{Q}}/\hat{\mathbb{Z}}^\times$ is the inclusion of idèle classes in adèle classes and $\pi: \mathbb{A}_{\mathbb{Q}}/\hat{\mathbb{Z}}^\times \to \mathbb{Q}^\times\backslash\mathbb{A}_{\mathbb{Q}}/\hat{\mathbb{Z}}^\times$ is induced by the projection $\mathbb{A}_{\mathbb{Q}} \to \mathbb{Q}^\times\backslash\mathbb{A}_{\mathbb{Q}}$. The conceptual understanding of the map $\pi_!$ uses Hochschild homology of noncommutative algebras. We recall that the space of adèle classes, i.e. the quotient $\mathbb{Q}^\times\backslash\mathbb{A}_{\mathbb{Q}}$, is encoded algebraically by the cross-product algebra $\mathcal{A} = \mathcal{S}(\mathbb{A}_{\mathbb{Q}}) \rtimes \mathbb{Q}^\times$. The Schwartz space $\mathcal{S}(\mathbb{A}_{\mathbb{Q}})$ is acted upon by (automorphisms of) $\mathbb{Q}^\times$ corresponding to the scaling action of $\mathbb{Q}^\times$ on rational adèles. An element of $\mathcal{A}$ is written symbolically as a finite sum $\sum a(q)U(q)$, with $a(q) \in \mathcal{S}(\mathbb{A}_{\mathbb{Q}})$.
From the inclusion of algebras $\mathcal{S}(\mathbb{A}_{\mathbb{Q}}) \subset \mathcal{S}(\mathbb{A}_{\mathbb{Q}}) \rtimes \mathbb{Q}^\times = \mathcal{A}$ one derives a corresponding morphism of Hochschild homologies $\pi_!: HH(\mathcal{S}(\mathbb{A}_{\mathbb{Q}})) \to HH(\mathcal{A})$.
Here, we use the shorthand notation $HH(A) := HH(A, A)$ for the Hochschild homology of an algebra $A$ with coefficients in the bimodule $A$. In noncommutative geometry, the vector space of differential forms of degree $k$ is replaced by the Hochschild homology $HH_k(A)$. If the algebra $A$ is commutative, then for $k = 0$ one has $HH_0(A) = A$, so that 0-forms are identified with functions. Indeed, the Hochschild boundary map is identically zero when the algebra $A$ is commutative. This result does not hold when $A = \mathcal{A}$, since $\mathcal{A} = \mathcal{S}(\mathbb{A}_{\mathbb{Q}}) \rtimes \mathbb{Q}^\times$ is no longer commutative. It is therefore meaningful to bring in the following

Proposition 3.1. The kernel of $\pi_!: HH_0(\mathcal{S}(\mathbb{A}_{\mathbb{Q}})) \to HH_0(\mathcal{A})$ is the $\mathbb{C}$-linear span $\mathcal{E}$ of the functions $f - f_q$, with $f \in \mathcal{S}(\mathbb{A}_{\mathbb{Q}})$, $q \in \mathbb{Q}^\times$, and where we set $f_q(x) := f(qx)$.
Proof. For any $f, g \in \mathcal{S}(\mathbb{A}_{\mathbb{Q}})$ and $q \in \mathbb{Q}^\times$ one has
(3.1) $[x, y] = fg - (fg)_q$, for $x := fU(q^{-1})$, $y := U(q)g$.
One knows ([14], Lemma 1) that any function $f \in \mathcal{S}(\mathbb{R})$ is a product of two elements of $\mathcal{S}(\mathbb{R})$. Moreover, an element of the Bruhat-Schwartz space $\mathcal{S}(\mathbb{A}_{\mathbb{Q}})$ is a finite linear combination of functions of the form $e \otimes f$, with $e^2 = e$. Thus any $f \in \mathcal{S}(\mathbb{A}_{\mathbb{Q}})$ can be written as a finite sum of products of two elements of $\mathcal{S}(\mathbb{A}_{\mathbb{Q}})$, so that (3.1) entails $f - f_q \in \ker\pi_!$. Conversely, let $f \in \ker\pi_!$. Then there exists a finite number of pairs $x_j, y_j \in \mathcal{A}$ such that $f = \sum_j [x_j, y_j]$, and one lets $P\big(\sum a(q)U(q)\big) := a(1)$ denote the projection on the coefficient of $U(1)$.
We shall prove that for any pair $x, y \in \mathcal{A}$ one has $P([x, y]) \in \mathcal{E}$. Indeed, writing $x = \sum a(q)U(q)$ and $y = \sum b(q)U(q)$, one has $P([x, y]) = \sum_q \big( a(q)\,b(q^{-1})_q - b(q^{-1})\,a(q)_{q^{-1}} \big)$. This projection belongs to $\mathcal{E}$ in view of the fact that $\big( a(q)\,b(q^{-1})_q \big)_{q^{-1}} = b(q^{-1})\,a(q)_{q^{-1}}$. This completes the proof.
Proposition 3.1 shows that the image of $\pi_!: HH_0(\mathcal{S}(\mathbb{A}_{\mathbb{Q}})) \to HH_0(\mathcal{A})$ is the space of coinvariants for the action of $\mathbb{Q}^\times$ on $\mathcal{S}(\mathbb{A}_{\mathbb{Q}})$, i.e. the quotient of $\mathcal{S}(\mathbb{A}_{\mathbb{Q}})$ by the subspace $\mathcal{E}$. An important point to remember now is that the Fourier transform becomes canonically defined on the above quotient. Indeed, the definition of the Fourier transform on adèles depends on the choice of a non-trivial character $\alpha$ on the additive, locally compact group $\mathbb{A}_{\mathbb{Q}}$ which is trivial on the subgroup $\mathbb{Q} \subset \mathbb{A}_{\mathbb{Q}}$; it is defined as $F_\alpha(f)(x) := \int_{\mathbb{A}_{\mathbb{Q}}} f(y)\,\alpha(xy)\,dy$. The space of characters of the compact group $G = \mathbb{A}_{\mathbb{Q}}/\mathbb{Q}$ is one-dimensional as a $\mathbb{Q}$-vector space, thus any non-trivial character $\beta$ as above is of the form $\beta(x) = \alpha(qx)$ for some $q \in \mathbb{Q}^\times$, so that $F_\beta(f) = F_\alpha(f)_q$. Therefore, the difference $F_\beta - F_\alpha$ vanishes on the quotient of $\mathcal{S}(\mathbb{A}_{\mathbb{Q}})$ by $\mathcal{E}$, and this latter space is preserved by $F_\alpha$ since $F_\alpha(f_q) = F_\alpha(f)_{q^{-1}}$.
3.1. HH, Morita invariance and the trace map. Let us recall that, given an algebra $A$, the trace map induces an isomorphism in degree zero Hochschild homology which extends to higher degrees. If $A$ is a convolution algebra of the étale groupoid of an equivalence relation $R$ with countable orbits on a space $Y$, and $\pi: Y \to Y/R$ is the quotient map, the trace map takes the form
(3.2) $\mathrm{Tr}(f)(x) = \sum_{\gamma \in \pi^{-1}(x),\ s(\gamma) = r(\gamma)} f(\gamma)$.
The trace induces a map on $HH_0$ of the function algebras, provided one takes care of the convergence issue when the size of the equivalence classes is infinite. If the relation $R$ is associated with the orbits of the free action of a discrete group $\Gamma$ on a locally compact space $Y$, the convolution algebra is the cross product of the algebra of functions on $Y$ by the discrete group $\Gamma$. In this case, the étale groupoid is $Y \rtimes \Gamma$, where the source and range maps are given respectively by $s(y, g) = y$ and $r(y, g) = gy$. The elements of the convolution algebra are functions $f(y, g)$ on $Y \rtimes \Gamma$. The diagonal terms in (3.2) correspond to the elements of $Y \rtimes \Gamma$ such that $s(y, g) = r(y, g)$, meaning that $g = 1$ is the neutral element of $\Gamma$, since the action of $\Gamma$ is assumed to be free. Then, the trace map is
(3.3) $\mathrm{Tr}(f)(x) = \sum_{y \in \pi^{-1}(x)} f(y, 1)$.
This sum is meaningful on the space of the proper orbits of $\Gamma$. For a lift $\rho(x) \in Y$, with $\pi(\rho(x)) = x$, the trace reads as
(3.4) $\mathrm{Tr}(f)(x) = \sum_{g \in \Gamma} f(g\rho(x), 1)$.
In the case of $Y = \mathbb{A}_{\mathbb{Q}}$ acted upon by $\Gamma = \mathbb{Q}^\times$, the proper orbits are parameterized by the idèle classes, and this space embeds in the adèle classes by means of the inclusion $\iota: \mathbb{Q}^\times\backslash\mathbb{A}^\times_{\mathbb{Q}} \to \mathbb{Q}^\times\backslash\mathbb{A}_{\mathbb{Q}}$.
We identify the idèle class group $C_{\mathbb{Q}} = \mathbb{Q}^\times\backslash\mathbb{A}^\times_{\mathbb{Q}}$ with $\hat{\mathbb{Z}}^\times \times \mathbb{R}^*_+$, using the canonical exact sequence effected by the modulus, $1 \to \hat{\mathbb{Z}}^\times \to C_{\mathbb{Q}} \xrightarrow{\mathrm{Mod}} \mathbb{R}^*_+ \to 1$. There is a natural section $\rho: C_{\mathbb{Q}} \to \mathbb{A}^\times_{\mathbb{Q}}$ of the quotient map, given by the canonical inclusion $\hat{\mathbb{Z}}^\times \times \mathbb{R}^*_+ \subset \mathbb{A}^f_{\mathbb{Q}} \times \mathbb{R} = \mathbb{A}_{\mathbb{Q}}$. Next, we focus on the $\hat{\mathbb{Z}}^\times$-invariant part of $\mathcal{S}(\mathbb{A}_{\mathbb{Q}})$. Then, with the notations of Proposition 3.1, we have

Lemma 3.2. The following facts hold:
(3.5) $\mathrm{Tr}(f)(u) = 2\sum_{n \in \mathbb{N}^\times} f(nu)$, $\forall u \in \mathbb{R}^*_+$.
Proof. (i) By definition, the elements of the Bruhat-Schwartz space $\mathcal{S}(\mathbb{A}_{\mathbb{Q}})$ are finite linear combinations of functions on $\mathbb{A}_{\mathbb{Q}}$ of the form $f = \bigotimes_{p \in S} f_p \otimes \bigotimes_{p \notin S} 1_{\mathbb{Z}_p} \otimes g$ ($S \ni \infty$ is a finite set of places, $g \in \mathcal{S}(\mathbb{R})$), where $\mathcal{S}(\mathbb{Q}_p)$ denotes the space of locally constant functions with compact support. An element of $\mathcal{S}(\mathbb{Q}_p)$ which is $\mathbb{Z}_p^\times$-invariant is a finite linear combination of the characteristic functions $(1_{\mathbb{Z}_p})_{p^n}(x) := 1_{\mathbb{Z}_p}(p^n x)$. Thus an element $h \in \mathcal{S}(\mathbb{A}_{\mathbb{Q}})^{\hat{\mathbb{Z}}^\times}$ is a finite linear combination of functions of the form $f = \bigotimes_p (1_{\mathbb{Z}_p})_{p^{n_p}} \otimes g$ (with $n_p = 0$ for almost all $p$). With $q = \prod p^{-n_p}$ one has $\ell(x) := f(qx)$, $\ell = 1_{\hat{\mathbb{Z}}} \otimes g$, $\ell - f \in \mathcal{E}^{\hat{\mathbb{Z}}^\times}$, and the replacement of $g$ with its even part $\frac12(g(x) + g(-x))$ does not change the class of $f$ modulo $\mathcal{E}^{\hat{\mathbb{Z}}^\times}$.
By Proposition 3.1 the Hochschild class in $HH_0(\mathcal{A})$ of $f$ is zero, thus $\mathrm{Tr}(f) = 0$. It follows from (3.4) that $E(f)(u) = 0$ for all $u \in \mathbb{R}^*_+$. Then (2.2) implies that the function $\psi(z) = \int_{\mathbb{R}^*_+} f(u)\,u^{\frac12 - iz}\,d^*u$ is well defined in the half-plane $\Im(z) > \frac12$, where it vanishes identically; thus $f = 0$. The converse of the statement is obvious.
The next statement complements Proposition 3.1 with a description of the range of $\pi_!: HH_0(\mathcal{S}(\mathbb{A}_{\mathbb{Q}})^{\hat{\mathbb{Z}}^\times}) \to HH_0(\mathcal{A})^{\hat{\mathbb{Z}}^\times}$; it also shows that the map $E(f)(u) = u^{1/2}\sum_{n=1}^{\infty} f(nu)$ coincides, up to the factor $\frac{u^{1/2}}{2}$, with the trace map (3.5). We keep the notations of Lemma 3.2. Since the trace map induces an isomorphism, this means that $\pi_!\big(HH_0(\mathcal{S}(\mathbb{A}_{\mathbb{Q}})^{\hat{\mathbb{Z}}^\times})\big)$ is determined by the images of the elements of the subalgebra $1_{\hat{\mathbb{Z}}} \otimes \mathcal{S}(\mathbb{R})^{ev} \subset \mathcal{S}(\mathbb{A}_{\mathbb{Q}})^{\hat{\mathbb{Z}}^\times}$. Furthermore, one has the identity $E(g)(u) = \frac{u^{1/2}}{2}\,\mathrm{Tr}(f)(u)$ for $f = 1_{\hat{\mathbb{Z}}} \otimes g$.

Proof. The first statement follows from Lemma 3.2 (i) and (iii); the second statement from (ii) of the same lemma.
The Laplacian ∆ " Hp1`Hq
This section describes the spectral interpretation of the squares of non-trivial zeros of the Riemann zeta function in terms of a suitable Laplacian. It also shows the relation between this Laplacian and the prolate wave operator.
4.1. The vanishing conditions. One starts with the exact sequence associated with the two vanishing conditions entering the definition of $E$; by implementing in it the evaluation $\delta_0(f) := f(0)$, one obtains a second exact sequence. The next lemma shows that both $\mathcal{S}(\mathbb{A}_{\mathbb{Q}})_0$ and $\mathcal{S}(\mathbb{A}_{\mathbb{Q}})_1$ have a description in terms of the ranges of two related differential operators. For simplicity of exposition, we restrict our discussion to the $\hat{\mathbb{Z}}^\times$-invariant parts of these function spaces.

Proof (of Lemma 4.1). (i) follows since $GL_1(\mathbb{A}_{\mathbb{Q}})$ is abelian, thus $H$ commutes with the action of $GL_1(\mathbb{A}_{\mathbb{Q}})$. Similarly, $Hf + f = 0$ implies that $xf(x)$ is constant, and hence $f = 0$ for $f \in \mathcal{S}(\mathbb{R})$. Thus $H(1+H): \mathcal{S}(\mathbb{R}) \to \mathcal{S}(\mathbb{R})_0$ is injective. Let now $f \in \mathcal{S}(\mathbb{R})^{ev}$ with $f(0) = 0$. Then the function $g(x) := f(x)/x$, $g(0) := 0$, is smooth, $g \in \mathcal{S}(\mathbb{R})^{odd}$, and there exists a unique $h \in \mathcal{S}(\mathbb{R})^{ev}$ such that $\partial_x h = g$. One has $Hh = f$, so that $(-1 - H)\hat{h} = \hat{f}$. Thus if $\hat{f}(0) = 0$ one has $\hat{h}(0) = 0$, and there exists $k \in \mathcal{S}(\mathbb{R})^{ev}$ with $Hk = \hat{h}$. Then $-(1+H)\hat{k} = h$ and $H(1+H)\hat{k} = -f$. This shows that $H(1+H): \mathcal{S}(\mathbb{R})^{ev} \to \mathcal{S}(\mathbb{R})^{ev}_0$ is surjective, and an isomorphism.
4.2. The Laplacian $\Delta = H(1+H)$ and its spectrum. This section is based on the following heuristic dictionary, suggesting a parallel between some classical notions in Hodge theory on the left-hand side and their counterparts in noncommutative geometry, for the adèle class space of the rationals. The notations are inclusive of those of Section 3.

    Algebra of functions          | Cross-product by $\mathbb{Q}^\times$
    Differential forms            | Hochschild homology
    Star operator $\star$         | $\iota \circ F$
    Differential $d$              | Operator $H$
    $\delta := \star d \star$     | Operator $1 + H$
    Laplacian $\Delta$            | $\Delta := H(1+H)$

The next Proposition is a variant of the spectral realization in [8,9].
Proposition 4.2. The following facts hold:
(i) The trace map $\mathrm{Tr}$ commutes with $\Delta = H(1+H)$, and the range of $\mathrm{Tr}\circ\Delta$ is contained in the strong Schwartz space $\mathbb{S}(\mathbb{R}^*_+) := \bigcap_{\beta\in\mathbb{R}} \mu^\beta \mathcal{S}(\mathbb{R}^*_+)$, with $\mu$ denoting the modulus.
(ii) The spectrum of $\Delta$ on the quotient of $\mathbb{S}(\mathbb{R}^*_+)$ by the closure of the range of $\mathrm{Tr}\circ\Delta$ is the set (counted with possible multiplicities) $\{-\rho(1-\rho) \mid \zeta(\rho) = 0,\ \rho \notin \mathbb{R}\}$.

Proof. (i) The trace map of (3.5) commutes with $\Delta$. By Lemma 4.1 (iii) the range of $\Delta$ is $\mathcal{S}(\mathbb{R})^{ev}_0$, thus the range of $E \circ (H(1+H))$ is contained in $\mathbb{S}(\mathbb{R}^*_+)$ (see [9], Lemma 2.51).
(ii) By construction, $\mathbb{S}(\mathbb{R}^*_+)$ is the intersection, indexed by compact intervals $J \subset \mathbb{R}$, of the spaces $\bigcap_{\beta \in J} \mu^\beta \mathcal{S}(\mathbb{R}^*_+)$. The Fourier transform provides projections $\Pi(N)$ with $\Pi(N)f = f$ for all $f \in \mathcal{S}(I)$.
This direct sum decomposition commutes with $\Delta$, since both $\Pi(N)$ and the conjugate of $\Delta$ by the Fourier transform $\mathbb{F}$ are given by multiplication operators. The conjugate of $H$ by $\mathbb{F}$ is the multiplication by $-z$, so that the conjugate of $\Delta$ is the multiplication by $-z(1-z)$. The spectrum of $\Delta$ is the union of the spectra of the finite-dimensional operators $\Delta_N := \Pi(N)\Delta = \Delta\Pi(N)$. By [9], Corollary 4.118, and the proof of Theorem 4.116, the finite-dimensional range of $\Pi(N)$ is described by the evaluation of $f \in \mathcal{S}(I)$ on the zeros $\rho \in Z(N)$ of the Riemann zeta function which lie inside the contour $\gamma_N$, i.e. by the map which assigns to $f$ its jets at the zeros, with values in $\bigoplus_{\rho \in Z(N)} \mathbb{C}^{(n_\rho)}$, where $\mathbb{C}^{(n_\rho)}$ denotes the space, of dimension $n_\rho$, of jets of order equal to the order $n_\rho$ of the zero $\rho$ of the zeta function. Moreover, the action of $\Delta_N$ is given by the matrix associated with the multiplication of $f \in \mathcal{S}(I)$ by $-z(1-z)$: this gives a triangular matrix whose diagonal consists of $n_\rho$ terms, all equal to $-\rho(1-\rho)$. Thus the spectrum of $\Delta$ on the quotient of $\mathbb{S}(\mathbb{R}^*_+)$ by the closure of the range of $\mathrm{Tr}\circ\Delta$ is the set (counted with multiplicities) $\{-\rho(1-\rho) \mid \zeta(\rho) = 0,\ \rho \notin \mathbb{R}\}$.

Proof. This follows from Proposition 4.2 and the fact that for $\rho \in \mathbb{C}$ ...

Remark 4.4. The main interest of the above reformulation of the spectral realization of [8,9] in terms of the Laplacian $\Delta$ is that the latter is intimately related to the prolate wave operator $W_\lambda$, which is shown in [10] to be self-adjoint and to have, for $\lambda = \sqrt{2}$, the same UV spectrum as the Riemann zeta function. The relation between $\Delta$ and $W_\lambda$ is that the latter is a perturbation of $\Delta$ by a multiple of the harmonic oscillator.
Sheaves on the Scaling Site and $H^0(\mathscr{S}, L^2/\Sigma E)$
Let $\mu \in \mathbb{R}_{>1}$ and let $\Sigma_\mu$ be the linear map, on functions $g : \mathbb{R}^{*}_{+} \to \mathbb{C}$ of sufficiently rapid decay at 0 and $\infty$, defined by (5.1). We shall denote by $S^{\mathrm{ev}}_0$ the linear space of real-valued, even Schwartz functions $f \in S(\mathbb{R})$ fulfilling the two conditions $f(0) = 0 = \int_{\mathbb{R}} f(x)\,dx$. The map

(5.2) $\quad (Ef)(u) = u^{1/2} \sum_{n>0} f(nu), \qquad f \in S^{\mathrm{ev}}_0,$

is proportional to a Riemann sum for the integral of $f$. The following lemma on scale-invariant Riemann sums justifies the pointwise good behavior of (5.2) (see [7], Lemma 6.1).

Lemma 5.1. Let $f$ be a complex-valued function of bounded variation on $(0,\infty)$. Assume that $f$ is of rapid decay for $u \to \infty$, is $O(u^2)$ when $u \to 0$, and that $\int_0^\infty f(t)\,dt = 0$. Then the following properties hold:
(i) The function $(Ef)(u)$ in (5.2) is well defined pointwise, is $O(u^{1/2})$ when $u \to 0$, and of rapid decay for $u \to \infty$.
(ii) Let $g = E(f)$; then the series (5.1) is geometrically convergent and defines a bounded, measurable function on $\mathbb{R}^{*}_{+}/\mu^{\mathbb{Z}}$.
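The role of the two hypotheses (vanishing integral and bounded variation) in part (i) can be seen from a short heuristic; this is a sketch only, not the proof of Lemma 5.1:

```latex
(Ef)(u) = u^{-1/2}\Big( u \sum_{n>0} f(nu) \Big),
\qquad
\Big|\, u \sum_{n>0} f(nu) - \int_0^{\infty} f(t)\,dt \,\Big| \;\le\; u\, \mathrm{Var}(f),
```

so the vanishing of $\int_0^\infty f$ leaves an error of size $O(u)$, whence $(Ef)(u) = u^{-1/2}\, O(u) = O(u^{1/2})$ as $u \to 0$.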
We recall that a sheaf over the Scaling Site $\mathscr{S} = [0,\infty) \rtimes \mathbb{N}^{\times}$ is a sheaf of sets on $[0,\infty)$ (endowed with the euclidean topology) which is equivariant for the action of the multiplicative monoid $\mathbb{N}^{\times}$ [6]. Since we work in characteristic zero, we select as structure sheaf of $\mathscr{S}$ the $\mathbb{N}^{\times}$-equivariant sheaf $\mathcal{O}$ whose sections on an open set $U \subset [0,\infty)$ form the space of smooth, complex-valued functions on $U$. The next proposition introduces two relevant sheaves of $\mathcal{O}$-modules.
Proposition 5.2. Let $L \in (0,\infty)$, $\mu = \exp L$, and $C_\mu = \mathbb{R}^{*}_{+}/\mu^{\mathbb{Z}}$. The following facts hold:
(i) As $L$ varies in $(0,\infty)$, the pointwise multiplicative Fourier transform defines an isomorphism between the family of Hilbert spaces $L^2(C_\mu)$ and the restriction to $(0,\infty)$ of the trivial vector bundle $L^2 = [0,\infty) \times \ell^2(\mathbb{Z})$.
(ii) The sheaf $L^2$ on $[0,\infty)$ is defined by associating to an open subset $U \subset [0,\infty)$ the space $\mathcal{F}(U) = C^{\infty}_0(U, L^2)$ of smooth sections of the vector bundle $L^2$ vanishing at $L = 0$. The action of $\mathbb{N}^{\times}$ on $L^2$ is given, for $n \in \mathbb{N}^{\times}$ and for any pair of open sets $U, U'$ of $[0,\infty)$ with $nU \subset U'$, by

(5.5) $\quad \mathcal{F}(U,n) : C^{\infty}_0(U', L^2) \to C^{\infty}_0(U, L^2), \qquad \mathcal{F}(U,n)(\xi)(x) = \sigma_n(\xi(nx)).$

Note that with $\mu = \exp x$ one has $\xi(nx) \in L^2(C_{\mu^n})$ and $\sigma_n(\xi(nx)) \in L^2(C_\mu)$. By construction one has $\sigma_n \sigma_m = \sigma_{nm}$, thus the above action of $\mathbb{N}^{\times}$ turns $L^2$ into a sheaf on $\mathscr{S} = [0,\infty) \rtimes \mathbb{N}^{\times}$.
(iii) By Lemma 5.1(i), $E(f)(u)$ is pointwise well defined, is $O(u^{1/2})$ for $u \to 0$, and of rapid decay for $u \to \infty$. By (ii) of the same lemma one has the required bound. It then follows from [7] (see (6.4), which is valid for $z = \frac{2\pi n}{L} \in \mathbb{R}$) that the corresponding estimate holds. Since $f \in S^{\mathrm{ev}}_0$, with $w(f)(u) := u^{1/2} f(u)$ the multiplicative Fourier transform $F(w(f)) = \psi$, $\psi(z) := \int_{\mathbb{R}^{*}_{+}} f(u)\, u^{\frac12 - iz}\, d^{*}u$, is holomorphic in the complex half-plane defined by $\Im(z) > -5/2$ [7]. Moreover, by construction $S^{\mathrm{ev}}_0$ is stable under the operation $f \mapsto u\,\partial_u f + \frac12 f$, hence $w(S^{\mathrm{ev}}_0)$ is stable under $f \mapsto u\,\partial_u f$. This operation multiplies $F(w(f))(z) = \psi(z)$ by $iz$. This argument shows that for any integer $m > 0$, $z^m \psi(z)$ is bounded in a strip around the real axis, and hence that the derivatives $\psi^{(k)}(s)$ are $O(|s|^{-m})$ on $\mathbb{R}$, for any $k \ge 0$. By applying classical estimates due to Lindelöf [11] (see [1], inequality (56)), the derivatives $\zeta^{(m)}(\frac12 + iz)$ are $O(|z|^{\alpha})$ for any $\alpha > 1/4$. Thus all derivatives $\partial_L^m$ of the function (5.6), now rewritten as

$h(L,n) := L^{-1/2}\, \zeta\big(\tfrac12 - \tfrac{2\pi i n}{L}\big)\, \psi\big(\tfrac{2\pi n}{L}\big),$

form sequences of rapid decay as functions of $n \in \mathbb{Z}$. It follows that $\Sigma E(f)$ is a smooth (global) section of the vector bundle $L^2$ over $(0,\infty)$. Moreover, when $n \ne 0$ the function $h(L,n)$ tends to 0 as $L \to 0$, and the same holds for all derivatives $\partial_L^m h(L,n)$. In fact, for any $m, k \ge 0$ one has

$\sum_{n \ne 0} |\partial_L^m h(L,n)|^2 = O(L^k) \quad \text{as } L \to 0.$

This result is a consequence of the rapid decay at $\infty$ of the derivatives of the function $\psi$, and of the above estimates for $\zeta(z)$ and its derivatives. For $n = 0$ one has $h(L,0) = L^{-1/2}\, \zeta(\tfrac12)\, \psi(0)$.
(iv) For any open subset $U \subset [0,\infty)$, the vector space $C^{\infty}_0(U, L^2)$ admits a natural Fréchet topology, with generating seminorms of the form (5.7), for $K \subset U$ a compact subset. One obtains a space of smooth sections $C^{\infty}_0(U, \Sigma E) \subset C^{\infty}_0(U, L^2)$, defined as sums of products $\sum h_j\, \Sigma E(f_j)$, with $f_j \in S^{\mathrm{ev}}_0$ and $h_j \in C^{\infty}_0(U, L^2)$. The map $\sigma_n : L^2(C_{\mu^n}) \to L^2(C_\mu)$ in (ii) is continuous, and from the equality $\sigma_n \circ \Sigma_{\mu^n} = \Sigma_\mu$ it follows (here we use the notations of the proof of (ii)) that the sections $\xi \in C^{\infty}(U', L^2(C_1))$ which belong to $C^{\infty}(U', \Sigma E(S^{\mathrm{ev}}_0))$ are mapped by $\mathcal{F}(U,n)$ into $C^{\infty}(U, \Sigma E(S^{\mathrm{ev}}_0))$. In this way one obtains a sheaf $\Sigma E \subset L^2$ of $\mathcal{O}$-modules over $\mathscr{S}$.
(v) Let $\xi \in H^0(U, \Sigma E)$. By hypothesis, $\xi$ is in the closure of $C^{\infty}_0(U, \Sigma E(S^{\mathrm{ev}}_0)) \subset C^{\infty}_0(U, L^2)$ for the Fréchet topology. The Fourier components of $\xi$ define continuous maps in the Fréchet topology; thus it follows from (5.6) that the functions $f_n = F(\xi)(n)$ are in the closure, for the Fréchet topology on $C^{\infty}_0(U, \mathbb{C})$, of $C^{\infty}_0(U, \mathbb{C})\, g_n$, where $g_n(L) := \zeta(\frac12 - \frac{2\pi i n}{L})$ is a multiplier of $C^{\infty}_0(U, \mathbb{C})$. This conclusion holds thanks to the moderate growth of the Riemann zeta function and its derivatives on the critical line. Conversely, let $\xi \in C^{\infty}_0(U, L^2)$ be such that each of its Fourier components $F(\xi)(n)$ belongs to the closure, for the Fréchet topology on $C^{\infty}_0(U, \mathbb{C})$, of $C^{\infty}_0(U, \mathbb{C})\, g_n$. Let $\rho \in C^{\infty}_c([0,\infty), [0,1])$ be identically equal to 1 on $[0,1]$ and with support inside $[0,2]$. The functions $\alpha_k(x) := \rho((kx)^{-1})$ ($k > 1$) fulfill the following three properties:
(1) $\alpha_k(x) = 0$ for all $x < (2k)^{-1}$, and $\alpha_k(x) = 1$ for all $x > k^{-1}$.
(3) For all $m > 0$ there exists $C_m < \infty$ such that $|x^{2m} \partial_x^m \alpha_k(x)| \le C_m\, k^{-1}$ for all $x \in [0,1]$, $k > 1$.
To justify (3), note that $x^2 \partial_x f((kx)^{-1}) = -k^{-1} f'((kx)^{-1})$ and that the derivatives of $\rho$ are bounded. Thus one has $|(x^2 \partial_x)^m \alpha_k(x)| \le \|\rho^{(m)}\|_{\infty}\, k^{-m}$ for all $x \in [0,\infty)$, $k > 1$, which implies (3) by induction on $m$. Thus, when $k \to \infty$, one has $\alpha_k \xi \to \xi$ in the Fréchet topology of $C^{\infty}_0(U, L^2)$. This is clear if $0 \notin U$, since then, on any compact subset $K \subset U$, all $\alpha_k$ are identically equal to 1 for $k > (\min K)^{-1}$. Assume now that $0 \in U$ and let $K = [0,\epsilon] \subset U$. With the notation of (5.7), let us show that $p^{(n,m)}_K((\alpha_k - 1)\xi) \to 0$ when $k \to \infty$. Since $\alpha_k(x) = 1$ for all $x > k^{-1}$, one has, using the finiteness of the seminorms of $\xi$, that $L^{-n}\, \|(\partial_L^m \xi)(L)\|_{L^2} \to 0$ as $k \to \infty$ on the relevant range $L \le k^{-1}$.
Then one obtains

$L^{-n}\, \partial_L^m\big((\alpha_k - 1)\xi\big)(L) = L^{-n}\big((\alpha_k - 1)\, \partial_L^m \xi\big)(L) + \dots$

Thus, using (3) above and the finiteness of the norms $p^{(n+2j,\, m-j)}_K(\xi)$, one derives $p^{(n,m)}_K((\alpha_k - 1)\xi) \to 0$ when $k \to \infty$. It remains to show that $\alpha_k \xi$ belongs to the submodule $C^{\infty}_0(U, \Sigma E)$. It is enough to show that, for $K \subset (0,\infty)$ a compact subset with $\min K > 0$, one can approximate $\xi$ by elements of $C^{\infty}_0(U, \Sigma E)$ for the norm $p^{(0,m)}_K$. Let $P_N$ be the orthogonal projection in $L^2(C_\mu)$ onto the finite-dimensional subspace determined by the vanishing of all Fourier components $F(\xi)(\ell)$ for $|\ell| > N$. Given $L \in K$ and $\epsilon > 0$, there exists $N(L,\epsilon) < \infty$ such that

(5.8) $\quad \|(1 - P_N)\, \partial_L^j \xi(L)\| < \epsilon \qquad \forall j \le m, \; N \ge N(L,\epsilon).$

The smoothness of $\xi$ implies that there exists an open neighborhood $V(L,\epsilon)$ of $L$ such that (5.8) holds on $V(L,\epsilon)$. The compactness of $K$ then shows that there exists a finite $N_K$ such that (5.8) holds on $K$ for $N \ge N_K$. It now suffices to show that one can approximate $P_N \xi$, for the norm $p^{(0,m)}_K$, by elements of $C^{\infty}_0(U, \Sigma E)$. To achieve this result, we let $L_0 \in K$ and $\delta_j \in C^{\infty}_c(\mathbb{R}^{*}_{+})$, $|j| \le N$, be such that

$\int_{\mathbb{R}^{*}_{+}} u^{1/2}\, \delta_j(u)\, d^{*}u = 0 \qquad \forall j, \; |j| \le N.$

One constructs $\delta_j$ starting from a function $h \in C^{\infty}_c(\mathbb{R}^{*}_{+})$ such that $F(h)\big(\frac{2\pi j}{L_0}\big) \ne 0$, and acting on $h$ by a differential polynomial whose effect is to multiply $F(h)$ by a polynomial vanishing at all $\frac{2\pi j'}{L_0}$, $j' \ne j$, and at $i/2$. By hypothesis, each Fourier component $F(\xi)(n)$ belongs to the closure in $C^{\infty}_0(U, \mathbb{C})$ of the multiples of the function $\zeta(\frac12 - \frac{2\pi i n}{L})$. Thus, given $\epsilon > 0$, one has functions $f_n \in C^{\infty}_0(U, \mathbb{C})$, $|n| \le N$, such that

$\big| \, \partial_L^j\big( F(\xi)(n) - \zeta\big(\tfrac12 - \tfrac{2\pi i n}{L}\big)\, f_n(L) \big) \, \big| \le \epsilon \qquad \forall j \le m, \; |n| \le N.$

We can now find a small open neighborhood $V$ of $L_0$ and functions $\phi_j \in C^{\infty}(V)$, $|j| \le N$, such that (5.9) holds. This is possible because the determinant of the matrix $M_{n,j}(L) = F(\delta_j)\big(\frac{2\pi n}{L}\big)$ is non-zero in a neighborhood of $L_0$, where $M_{n,j}(L_0)$ is the identity matrix. The even functions $d_j(u)$ on $\mathbb{R}$, which agree with $u^{-1/2}\, \delta_j(u)$ for $u > 0$, are all in $S^{\mathrm{ev}}_0$, since $\int_{\mathbb{R}} d_j(x)\, dx = 2 \int_{\mathbb{R}^{*}_{+}} u^{1/2}\, \delta_j(u)\, d^{*}u = 0$. One then has, by (5.6), and by (5.9) one gets

$\sum \phi_j(L)\, F\big(\Sigma_\mu(E(d_j))\big)(n) = \zeta\big(\tfrac12 - \tfrac{2\pi i n}{L}\big)\, f_n(L) \qquad \forall L \in V.$

One finally covers $K$ by finitely many such open sets $V$ and uses a partition of unity subordinate to this covering to obtain smooth functions $\varphi_\ell \in C^{\infty}_c(0,\infty)$ and $g_\ell \in S^{\mathrm{ev}}_0$ such that the Fourier component of index $n$, $|n| \le N$, of $\sum \varphi_\ell\, \Sigma E(g_\ell)$ is equal to $\zeta(\frac12 - \frac{2\pi i n}{L})\, f_n(L)$ on $K$. This shows that $\xi$ belongs to the closure of $C^{\infty}_0(U, \Sigma E)$.

We recall that the space of global sections $H^0(\mathscr{T}, \mathcal{F})$ of a sheaf of sets $\mathcal{F}$ in a Grothendieck topos $\mathscr{T}$ is defined to be the set $\mathrm{Hom}_{\mathscr{T}}(1, \mathcal{F})$, where $1$ denotes the terminal object of $\mathscr{T}$. For $\mathscr{T} = \mathscr{S}$ and $\mathcal{F}$ a sheaf of sets on $[0,\infty)$, $1$ assigns to an open set $U \subset [0,\infty)$ the single element $*$, on which $\mathbb{N}^{\times}$ acts as the identity. Thus, we understand an element of $\mathrm{Hom}_{\mathscr{S}}(1, \mathcal{F})$ as a global section $\xi$ of $\mathcal{F}$, where $\mathcal{F}$ is viewed as a sheaf on $[0,\infty)$ invariant under the action of $\mathbb{N}^{\times}$.
With the notations of Proposition 5.2, for $\xi \in \mathrm{Hom}_{\mathscr{S}}(1, L^2)$ we write $\hat{\xi}(L,n) := F(\xi)(n)$ for the (multiplicative) Fourier components of $\xi$. Then we have:

Proof. (i) Let $\xi \in \mathrm{Hom}_{\mathscr{S}}(1, L^2)$: this is a global section $\xi \in C^{\infty}_0([0,\infty), L^2)$ invariant under the action of $\mathbb{N}^{\times}$, i.e. such that $\sigma_n(\xi(nL)) = \xi(L)$ for all pairs $(L,n)$. The Fourier components $\hat{\xi}(L,n)$ of any such section are smooth functions of $L \in [0,\infty)$ vanishing at $L = 0$, for $n \ne 0$, together with all their derivatives. The equality $\sigma_n(\xi(L)) = \xi(L/n)$ entails, for $n > 0$, the corresponding relation on Fourier components. This shows that the $\hat{\xi}(L,n)$ are uniquely determined, for $n > 0$, by the function $\hat{\xi}(L,1)$ and, for $n < 0$, by the function $\hat{\xi}(L,-1)$. With $g(L) = \hat{\xi}(L,0)$ one has $g(L) = n^{-1/2} g(L/n)$ for all $n > 0$. This implies, since $\mathbb{Q}^{*}$ is dense in $\mathbb{R}^{*}$ and $g$ is assumed to be smooth, that $g$ is proportional to $L^{-1/2}$ and hence identically 0, since it corresponds to a global section smooth at $0 \in [0,\infty)$. This argument proves that $\gamma$ is injective. Let us show that $\gamma$ is also surjective. Given a pair of functions $f_{\pm} \in C^{\infty}_0([0,\infty), \mathbb{C})$, we construct a global section $\xi \in H^0(\mathscr{S}, L^2)$ such that $\gamma(\xi) = (f_+, f_-)$. One defines $\xi(L) \in L^2(C_\mu)$ by means of its Fourier components, set to be $\hat{\xi}(L,0) := 0$ and, for $n \ne 0$, $\hat{\xi}(L,n) := |n|^{-1/2}\, f_{\mathrm{sign}(n)}(L/|n|)$.

Since the $f_{\pm}(x)$ are of rapid decay for $x \to 0$, $\sum |\hat{\xi}(L,n)|^2 < \infty$, thus $\xi(L) \in L^2(C_\mu)$. All derivatives of $f_{\pm}(x)$ are also of rapid decay for $x \to 0$; thus all derivatives $\partial_L^k(\xi(L))$ belong to $L^2(C_\mu)$, and the $L^2$-norms $\|\partial_L^k(\xi(L))\|$ are of rapid decay for $L \to 0$. By construction $\sigma_n(\xi(L)) = \xi(L/n)$, which entails $\xi \in H^0(\mathscr{S}, L^2)$ with $\gamma(\xi) = (f_+, f_-)$.
(ii) Let $\xi \in H^0(\mathscr{S}, \Sigma E)$. By Proposition 5.2(v), the functions $f_{\pm} = \hat{\xi}(L, \pm 1)$ are in the closure, for the Fréchet topology on $C^{\infty}_0([0,\infty), \mathbb{C})$, of the ideal generated by the functions $\zeta(\frac12 \mp \frac{2\pi i}{L})$. Conversely, let $\xi \in H^0(\mathscr{S}, L^2)$ and assume that $\gamma(\xi)$ is in the closed submodule generated by multiplication with $\zeta(\frac12 \mp \frac{2\pi i}{L})$. The $\mathbb{N}^{\times}$-invariance of $\xi$ implies $\hat{\xi}(L,n) = |n|^{-1/2}\, \hat{\xi}(L/|n|, \mathrm{sign}(n))$ for $n \ne 0$. Thus the Fourier components $\hat{\xi}(L,n)$ belong to the closure in $C^{\infty}_0(U, \mathbb{C})$ of the multiples of the function $\zeta(\frac12 - \frac{2\pi i n}{L})$; then Proposition 5.2(v) again implies $\xi \in H^0(\mathscr{S}, \Sigma E)$.

The action of $\mathbb{R}^{*}_{+}$ on the sheaf $L^2$ is given by the action $\vartheta$ on the Fourier components of its sections $\xi$. With $\mu = \exp L$, $L \in (0,\infty)$, $n \in \mathbb{N}^{*}$ and $\lambda \in \mathbb{R}^{*}_{+}$, this is given by (5.11). The following result explains in particular how the quotient sheaf $L^2/\Sigma E$ on $\mathscr{S}$ handles eventual multiplicities of critical zeros of the zeta function.

Proof. We first show that the canonical map $q : H^0(\mathscr{S}, L^2) \to H^0(\mathscr{S}, L^2/\Sigma E)$ is surjective. Let $\xi \in H^0(\mathscr{S}, L^2/\Sigma E)$: as a section of $L^2/\Sigma E$ on $[0,\infty)$, there exist an open neighborhood $V = [0,\epsilon)$ of $0 \in [0,\infty)$ and a section $\eta \in C^{\infty}_0(V, L^2)$ such that the class of $\eta$ in $C^{\infty}_0(V, L^2/\Sigma E)$ is the restriction of $\xi$ to $V$. The Fourier components $\hat{\eta}(L,n)$ are meaningful for $L \in V$. Since $\xi$ is $\mathbb{N}^{\times}$-invariant, for any $n \in \mathbb{N}^{\times}$ the class of $\mathcal{F}(V/n, n)(\eta)$, with $\mathcal{F}(V/n, n)(\eta)(L) := \sigma_n(\eta(nL))$ (see (5.4)), is equal to the class of the restriction of $\eta$ in $C^{\infty}_0(V/n, L^2/\Sigma E)$. We thus obtain

$\eta(L) - \mathcal{F}(V/n, n)(\eta) \in C^{\infty}_0(V/n, \Sigma E).$

Furthermore, the Fourier components of $\alpha = \mathcal{F}(V/n, n)(\eta)$ are given by $\hat{\alpha}(L,k) = n^{1/2}\, \hat{\eta}(nL, nk)$.

The next step is to extend the functions $\hat{\eta}(L, \pm 1) \in C^{\infty}_0(V, \mathbb{C})$ to $f_{\pm} \in C^{\infty}_0([0,\infty), \mathbb{C})$ fulfilling the following property: for any open set $U \subset [0,\infty)$ and any section $\beta \in C^{\infty}_0(U, L^2)$ whose class in $C^{\infty}_0(U, L^2/\Sigma E)$ is the restriction of $\xi$ to $U$, the functions $\hat{\beta}(L, \pm 1) - f_{\pm}(L)$ belong to the closure in $C^{\infty}_0(U, \mathbb{C})$ of the multiples of the function $\zeta(\frac12 \mp \frac{2\pi i}{L})$. To construct $f_{\pm}$, one considers the sheaf $\mathcal{G}_{\pm}$, the quotient of the sheaf of $C^{\infty}_0([0,\infty), \mathbb{C})$ functions by the closure of the ideal subsheaf generated by the multiples of $\zeta(\frac12 \mp \frac{2\pi i}{L})$. Since the latter is a module over the sheaf of $C^{\infty}$ functions, it is a fine sheaf; thus a global section of $\mathcal{G}_{\pm}$ can be lifted to a function. By Proposition 5.2(v), the Fourier components $\hat{\xi}_j(L, \pm 1)$ of local sections $\xi_j$ of $L^2$ representing $\xi$ define a global section of $\mathcal{G}_{\pm}$. The functions $f_{\pm}$ are obtained by lifting these sections. Appealing to Lemma 5.3, we let $\phi \in H^0(\mathscr{S}, L^2)$ be the unique global section such that $\gamma(\phi) = (f_+, f_-)$. We then show that $q(\phi) = \xi$. We have already proven that the restrictions to $V = [0,\epsilon)$ are the same. Thus it is enough to show that, given $L_0 > 0$ and a lift $\xi_0 \in C^{\infty}_0(U, L^2)$ of $\xi$ on a small open interval $U$ containing $L_0$, the difference $\delta = \phi - \xi_0$ is a section of $\Sigma E$. Again by Proposition 5.2(v), it suffices to show that the Fourier components $\hat{\delta}(L,n)$ are in the closure of the ideal generated by the multiples of $\zeta(\frac12 - \frac{2\pi i n}{L})$. The $\mathbb{N}^{\times}$-invariance of $\xi$ shows that $\mathcal{F}(U/n, n)(\xi_0)$ (see (5.5)) is a lift of $\xi$ on $U/n$. Thus, by the defining properties of the functions $f_{\pm}$, one has

$\widehat{\mathcal{F}(U/n, n)(\xi_0)}(\pm 1) - f_{\pm} \in C^{\infty}(U, \mathbb{C})\, \zeta_{\pm}, \qquad \zeta_{\pm}(L) = \zeta\big(\tfrac12 \mp \tfrac{2\pi i}{L}\big).$

With a similar argument, using the invariance of $\phi$ under the action of $\mathcal{F}(U/n, n)$, one obtains that $\hat{\delta}(n)$ is in the closure of the ideal generated by the multiples of $\zeta(\frac12 - \frac{2\pi i n}{L})$.
This sequence is equivariant for the action (5.11) of $\vartheta$ of $\mathbb{R}^{*}_{+}$ on the bundle $L^2$. For $h \in L^1(\mathbb{R}^{*}_{+}, d^{*}u)$ one has

(5.13) $\quad \widehat{(\vartheta(h)\xi)}(L,n) = F(h)\big(\tfrac{2\pi n}{L}\big)\, \hat{\xi}(L,n).$

$\Phi$ is well defined, since all derivatives of $\Phi_{\pm}(f)(L)$ tend to 0 when $L \to 0$ (any function $f \in S(\mathbb{R})$ is of rapid decay, as are all its derivatives). The exact sequence (5.12), together with Lemma 5.3, then gives an induced isomorphism

$\gamma : H^0(\mathscr{S}, L^2/\Sigma E) \simeq (C^{\infty}_0)^2 \big/ \big(C^{\infty}_0\, \zeta_+ \times C^{\infty}_0\, \zeta_-\big).$

In turn, the map $\Phi$ induces a morphism $\overline{\Phi} : S(\mathbb{R})/(S(\mathbb{R})\zeta) \to (C^{\infty}_0)^2 / (C^{\infty}_0\, \zeta_+ \times C^{\infty}_0\, \zeta_-)$. By (5.13) this morphism is equivariant for the action of $\mathbb{R}^{*}_{+}$. The map $\Phi$ is not an isomorphism, since elements of its range have finite limits at $\infty$. However, it is injective and its range contains all elements of $(C^{\infty}_0)^2$ which have compact support. Since $\zeta_{\pm}(L) = \zeta(\tfrac12 \mp \tfrac{2\pi i}{L})$ tends to a finite non-zero limit when $L \to 0$, $\overline{\Phi}$ is an isomorphism.
Remark 5.5. By a theorem of Whitney (see [12], Corollary 1.7), the closure of the ideal of multiples of $\zeta(\frac12 + is)$ in $S(\mathbb{R})$ is the subspace of those $f \in S(\mathbb{R})$ which vanish to the same order as $\zeta$ at every (critical) zero $s \in Z$. Thus, if any such zero is a multiple zero of order $m > 1$, one finds that the action of $\mathbb{R}^{*}_{+}$ on the global sections of the quotient sheaf $L^2/\Sigma E$ admits a non-trivial Jordan decomposition of the form $\vartheta(\lambda)\xi = \lambda^{is}(\xi + N(\lambda)\xi)$, with $N(\lambda)^m = 0$ and $(1 + N(u))(1 + N(v)) = 1 + N(uv)$ for all $u, v \in \mathbb{R}^{*}_{+}$.
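The two properties stated for $N(\lambda)$ are exactly those of the unipotent family $N(\lambda) := e^{(\log\lambda)N} - 1$ for a single nilpotent $N$ with $N^m = 0$; a sketch of the check, under that assumption:

```latex
(1+N(u))(1+N(v)) = e^{(\log u)N}\, e^{(\log v)N} = e^{(\log uv)N} = 1 + N(uv),
\qquad
N(\lambda)^m = \Big(\sum_{k=1}^{m-1} \tfrac{(\log\lambda)^k}{k!}\, N^k\Big)^{\!m} = 0,
```

since every term in the expansion of the $m$-th power contains $N$ raised to a power of at least $m$.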
"Physics"
] |
Linkage disequilibrium network analysis (LDna) gives a global view of chromosomal inversions, local adaptation and geographic structure
Recent advances in sequencing allow population-genomic data to be generated for virtually any species. However, approaches to analyse such data lag behind the ability to generate it, particularly in nonmodel species. Linkage disequilibrium (LD, the nonrandom association of alleles from different loci) is a highly sensitive indicator of many evolutionary phenomena including chromosomal inversions, local adaptation and geographical structure. Here, we present linkage disequilibrium network analysis (LDna), which accesses information on LD shared between multiple loci genomewide. In LD networks, vertices represent loci, and connections between vertices represent the LD between them. We analysed such networks in two test cases: a new restriction-site-associated DNA sequence (RAD-seq) data set for Anopheles baimaii, a Southeast Asian malaria vector; and a well-characterized single nucleotide polymorphism (SNP) data set from 21 three-spined stickleback individuals. In each case, we readily identified five distinct LD network clusters (single-outlier clusters, SOCs), each comprising many loci connected by high LD. In A. baimaii, further population-genetic analyses supported the inference that each SOC corresponds to a large inversion, consistent with previous cytological studies. For sticklebacks, we inferred that each SOC was associated with a distinct evolutionary phenomenon: two chromosomal inversions, local adaptation, population-demographic history and geographic structure. LDna is thus a useful exploratory tool, able to give a global overview of LD associated with diverse evolutionary phenomena and identify loci potentially involved. LDna does not require a linkage map or reference genome, so it is applicable to any population-genomic data set, making it especially valuable for nonmodel species.
Introduction
Recent developments in next-generation sequencing (Davey et al. 2011; Seeb et al. 2011) have opened up a new era of population genomics in nonmodel species, broadening the range of evolutionary and ecological questions that can be addressed (Andrew et al. 2013; Narum et al. 2013). A major aim in this field is to distinguish locus-specific effects (such as selection) from genomewide effects (such as population structure and demographic history). This is often achieved by identifying outlier loci in empirical distributions of population-genetic statistics such as polymorphism and divergence (Gaggiotti et al. 2009; Fisher et al. 2011). Considering loci separately like this ignores potentially valuable information about alleles from multiple loci that may be nonrandomly associated with each other, that is, be in linkage disequilibrium (LD; Hill & Robertson 1968; Barton 2011).
LD exists when combinations of alleles across loci deviate from well-mixed (statistical equilibrium) expectations (Barton et al. 2007). Thus, any evolutionary phenomenon that perturbs the system away from this equilibrium, such as population structure or selection, will leave a signature of LD in the genome. Once LD exists, any mechanism that modulates its decay (i.e. affects the rate of recombination), such as chromosomal rearrangements (Rieseberg 2001) or recombination cold/hot spots (Maniatis 2002), will also leave its mark in patterns of LD. Most notably, inversions strongly restrict recombination in heterokaryotypes, in particular around the inversion break points (Noor & Bennett 2009). LD therefore has the potential to be informative about many important evolutionary phenomena that affect genomes (Ardlie et al. 2002; Slatkin 2008).
Many current methods to analyse genomewide multilocus LD require the genomic position of the loci to be known (International HapMap Consortium 2005; Voight et al. 2006; Falush et al. 2007; Kim et al. 2008; Kumasaka et al. 2010; Lawson et al. 2012; Koch et al. 2013; Ralph & Coop 2013) and are therefore limited to species with well-annotated reference genomes. This is unfortunate, as the ability to gain information about LD associated with important evolutionary phenomena does not crucially depend on knowing where the loci come from in the genome. The focus on using genomic location means that while measures of LD may in principle be applied to loci across the genome, they are frequently only applied within chromosomes, or to specific subsets of chromosomes (e.g. the MHC locus). This loses information about LD among more widely scattered loci. To address these issues, we develop here a network-analytical approach to identifying groups of loci with high intragroup LD. It does not require knowledge of the physical position of loci in the genome and can be used for all loci from a population-genomic data set in a single analysis. Appropriate population-genetic analyses of the sets of loci identified by our approach may then reveal their involvement in evolutionary phenomena, enabling a novel global view of processes shaping the genome.
Here, we will use networks to refer to the combinations of vertices and edges which form the heart of mathematical graph theory. Network analyses have successfully been used to study a diverse range of complex biological processes (Mason & Verwoerd 2007;Foote et al. 2009;Knight & Pinney 2009;Marbach et al. 2010). A central theme in network analyses is to identify sets of vertices (clusters) that have more and/or stronger connections between their members than to the remainder of the network (Newman & Girvan 2004;Leskovec et al. 2009). In our network-analytical approach to LD, the vertices in a network represent loci and the edges between them represent LD. In this way, we will use all pairwise LD values among loci to gain an overall picture of LD within a given population-genomic data set.
Any evolutionary phenomena that result in elevated LD among multiple loci are expected to cause distinct clusters in LD networks. Some examples, such as inversions and selective sweeps, only affect localized genomic regions within single chromosomes. Others involve loci more widely spread in the genome, potentially spanning several chromosomes. These include epistatic (nonadditive) fitness interactions among loci and population admixture. Admixture LD can be natural, for example the recent rejoining of allopatrically diverged populations; or it can be artificial, for example where the study sample comprises individuals from two or more divergent populations. In both cases, drift or selection, acting independently in the ancestral or sampled populations respectively, will result in sets of loci sharing high LD, potentially scattered across the genome. When such different evolutionary phenomena responsible for LD co-occur and are sufficiently different from each other, that is, do not affect the same individuals or loci in the same way, we expect each to generate a distinct cluster in an LD network.
To identify clusters of loci that share high LD within an LD network, we have developed linkage disequilibrium network analysis (LDna). We evaluate the LDna approach by applying it to two study systems exhibiting well-characterized evolutionary phenomena associated with elevated LD among multiple loci: inversions, local adaptation and geographic structure. The first of these is Anopheles baimaii, a mosquito which is a major malaria vector in Southeast Asia (Sinka et al. 2011; Sarma et al. 2012). Anopheles baimaii has a widespread distribution extending from northeast India, through Myanmar and into Thailand (Obsomer et al. 2012). Polytene chromosome studies have identified five large inversions, each on a different chromosomal arm (2L, 2R, 3L, 3R and the X-chromosome; Baimai et al. 1988a,b; Poopittayasataporn & Baimai 1995). These inversions are polymorphic within populations, occurring at varying frequencies across the distribution of this species (Baimai et al. 1988a,b; Poopittayasataporn & Baimai 1995). We thus predict that in a population-genomic data set from this species, LDna will identify distinct clusters of loci, each cluster corresponding to an inversion.
The second system is the well-studied three-spined stickleback (Gasterosteus aculeatus; Colosimo et al. 2005; Jones et al. 2012). In this species, we expect, in addition to three known inversions, local adaptation to marine and freshwater habitats and geographical structuring between the Atlantic and Pacific populations to be associated with LD signals among multiple loci. Population-genomic data from this species will enable us to evaluate the extent to which LDna is able to detect distinct clusters associated with the simultaneous presence of different evolutionary phenomena.
Linkage disequilibrium network analysis (LDna) outline
An outline of LDna is given in Fig. 1. We start with a matrix of pairwise LD values (Fig. 1A). LD was measured as the squared pairwise correlation coefficient between loci, r 2 (Hill & Robertson 1968), calculated using the 'LD' function in the R package 'genetics' (Warnes et al. 2013). These LD values were treated as weights for edges that connect loci (vertices) in networks which were constructed using the R package 'igraph' (Csardi & Nepusz 2006). We generate a series of networks, each using the subset of pairwise LD values above a particular threshold. As LD threshold decreases, vertices become increasingly connected in clusters that grow and eventually merge to form a single fully connected network. This successive merging of clusters can be effectively visualized as a tree (Fig. 1B), where branches represent clusters and the joining of branches represents clusters and/or individual loci merging (i.e. become connected by at least one edge) at a particular LD threshold.
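This construction is straightforward to reproduce; a minimal sketch in R with the 'igraph' package named above (the r² matrix here is filled with random values, purely for illustration):

```r
library(igraph)

# Symmetric matrix of pairwise r^2 values, with loci as vertices
# (random values here, purely for illustration)
set.seed(1)
n <- 50
LDmat <- matrix(0, n, n)
LDmat[upper.tri(LDmat)] <- runif(sum(upper.tri(LDmat)))
LDmat <- LDmat + t(LDmat)

# Keep only edges whose r^2 exceeds the current LD threshold
threshold <- 0.8
adj <- LDmat * (LDmat >= threshold)

# Vertices are loci; weighted edges are the retained pairwise LD values
g <- graph_from_adjacency_matrix(adj, mode = "undirected", weighted = TRUE)

# Clusters at this threshold are the connected components of the network
comp <- components(g)
table(comp$csize)
```

Lowering `threshold` and recomputing the components reproduces the successive cluster mergers summarized by the tree.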
The change in LD when two clusters merge is measured by λ (Fig. 1B,C). We calculated λ for every cluster in the tree, defined as λ = (x̄ib − x̄ia) × nib, where x̄ib is the median of all intra-cluster r² values for cluster i before merger; x̄ia is the median, after merger, of the intra-cluster r² values involving at least one locus from the pre-merger cluster i; and nib is the number of loci in cluster i before merger. High values of λ indicate the merger of large clusters or strongly associated clusters, that is, where intra-cluster pairwise LD values are high relative to inter-cluster LD values (Fig. 1B). Any λ value exceeding the median by a multiple φ of the median absolute deviation, for a cluster containing at least |E|min edges, is designated an outlier cluster (Fig. 1C).

Fig. 1 (B) The order in which clusters merge with decreasing threshold can be visualized as a tree, where only one connection between clusters is required for clusters to be considered as merged. For each cluster in the tree, the change at merger in the median LD of all pairwise connections between loci in the cluster is measured by λ (see Materials and Methods). (C) All λ values plotted in order of increasing value (Index). Clusters with exceptionally high values of λ relative to the median across all the values in a tree (above the user-controlled dashed line) are considered as outliers. In (B) and (C), red colour highlights clusters that do not have any other outlier clusters nested within them (single-outlier clusters, SOCs), and blue highlights the outlier cluster that contains multiple SOCs (compound outlier cluster, COC).
The two parameters, φ and |E|min, allow the user to pick out both 'diffuse' and 'compact' clusters as outliers. A diffuse cluster can be made up of many moderately associated and moderately connected vertices, while a compact cluster has a few vertices with strong associations and/or high connectivity. The purpose of these parameters is to enable the identification of clusters representing sets of loci that bear distinct evolutionary genetic signals in the data. Approaches to parameter value choice are explored in Results and in Appendix S1 and S2 (Supporting Information; these are also included as tutorials for the R package 'LDna', see Data accessibility). From the outlier values identified, we wish to determine the subsets that correspond to discrete evolutionary phenomena. In practice, we observe that some outlier clusters are nested within others. We designate any 'tip' cluster with no other cluster nested within it as a single-outlier cluster (SOC, coloured red in Fig. 1). Any other outlier we designate as a compound outlier cluster (COC, coloured blue in Fig. 1). The set of SOCs identified in this way represents mutually exclusive clusters, each containing unique loci that share high LD. We hypothesize that each SOC corresponds to a distinct evolutionary phenomenon acting in the population. If this is the case, COCs may contain information about the relationships among evolutionary phenomena. However, exploring the interpretation of COCs is beyond the scope of this study, where we shall focus on testing the biological interpretation of SOCs.
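The λ statistic and the outlier rule are simple enough to state in code; the sketch below only illustrates the definitions (the function names are ours, not the API of the released 'LDna' package, which provides its own routines):

```r
# r2_before: pairwise r^2 values within cluster i before the merger
# r2_after:  pairwise r^2 values, after the merger, for pairs involving
#            at least one locus from the pre-merger cluster i
# n_before:  number of loci in cluster i before the merger
lambda_stat <- function(r2_before, r2_after, n_before) {
  (median(r2_before) - median(r2_after)) * n_before
}

# Outlier rule: lambda exceeds the median lambda across the tree by
# phi raw median absolute deviations (|E|min is checked separately)
is_outlier <- function(lambda, all_lambda, phi) {
  lambda > median(all_lambda) + phi * mad(all_lambda, constant = 1)
}
```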
Population-genetic interpretation of LDna analysis on simulated data
To illustrate how LDna may be applied to more realistic data, we created a data set simulated under a scenario of population structure using fastsimcoal2 (Excoffier et al. 2013; see Appendix S3, Supporting Information, for detailed methods). This involved an ancestral population that split into three populations, each with an effective population size of 1000 diploid individuals, 1000 generations ago (Fig. 2A). These populations evolved through mutation, recombination and drift only, without selection or migration (see Appendix S3, Supporting Information, for details). LDna was applied to 25 diploid individuals from each final population.
Populations were pooled prior to calculating LD, thereby creating sample admixture LD. As expected for three equivalent populations, LDna identifies three SOCs at similar LD thresholds (Fig. 2B,C). Analysis of these SOCs by PCA reveals that each SOC represents the genetic distinction of each population from the other two, due to the unique trajectory of mutation and drift in each population (Fig. 2D). This pattern, in which the number of clusters corresponds to the number of comparisons among populations, can be seen for other numbers of simulated populations too (Appendix S3-Fig. 2, Supporting Information). When we incorporate migration among populations into these simulations, the resulting recombination erases the LD clusters progressively with
Preparation of population-genomic data sets and genome mapping
The preparation of a restriction-site-associated DNA (RAD) population-genomic data set for A. baimaii and a three-spined stickleback SNP data set is described in Appendix S4 (Supporting Information). Note that when many SNPs come from the same RAD locus, they may themselves cause clustering in LDna, in particular when the parameters |E|min and φ are set to low values (see Appendix S2 for details). However, in practice, we found that most RAD loci contained a single SNP (see Results). The consensus sequences for each relevant RAD locus were mapped against the A. dirus reference genome using BLAT (Kent 2002) run with the default parameters, and a P-value threshold of 1 × 10⁻⁸ was used to identify significant hits. Second, we mapped all our linkage map RAD loci (as above) to the scaffolds from the first step and used these to anchor the scaffolds to the linkage maps. Sequences were aligned to the A. gambiae genome using the BLAST algorithm through https://www.vectorbase.org/blast with default settings, except that the maximum E-value was set to 1 × 10⁻³.

Fig. 3 caption (panels C and D): see Fig. 6 and Appendix S1 and S2, Supporting Information, for details of parameter value selection. (C) A snapshot of a full network at an LD threshold value just above that at which any of the five SOCs merge. (D) Each SOC is shown at an LD threshold where it is joined by a single link to other loci, in decreasing order of threshold from left to right, top to bottom. For each of these mergers, we have indicated, in brackets after the COC name, which SOCs are nested within each COC. COCs are shown here but were not analysed further.
As a draft genome is only available for a close relative of A. baimaii (A. dirus; estimated divergence time from A. baimaii ~1 Mya; Morgan et al. 2010), we also produced a linkage map for A. baimaii (described in Appendix S4, Supporting Information). Each relevant locus was mapped against the A. dirus reference genome using BLAT (Kent 2002) run with the default parameters. A P-value threshold of 1 × 10⁻⁸ was used to identify significant hits, and scaffolds with positive hits were then anchored to the linkage map. Chromosomal rearrangements are very common in Diptera, but chromosome arms remain syntenic even between distantly related species (Bolshakov 2002). Therefore, we also mapped all relevant loci to the genome of A. gambiae (the closest well-annotated reference genome to A. baimaii) using BLAST (https://www.vectorbase.org/blast) with default settings, except that the maximum E-value was set to 1 × 10⁻³.
Population-genetic structure
Principal component analysis (PCA) and discriminant analysis of principal components (DAPC) were implemented in the R package 'adegenet' (Jombart & Ahmed 2011). For PCA, first, allele frequencies were scaled and missing genotype data were replaced by the mean using function 'scaleGen', and the PCA was performed with function 'dudi.pca'. For DAPC, the number of genetically distinct groups (k) present was first identified by running the function 'find.clusters', in which the function 'kmeans' is run sequentially with increasing number of groups and the different clustering solutions compared using the Bayesian information criterion (BIC). The optimal numbers of clusters were inferred visually by inspecting how BIC decreased as the number of groups increased following guidelines in the documentation for Adegenet. All other basic population-genetic parameters were calculated with functions from Adegenet.
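A condensed version of this adegenet workflow might look as follows (a sketch, assuming a genind object `genind_obj`; recent adegenet versions take `NA.method` where older ones used `missing`, and here k is chosen automatically rather than visually as described above):

```r
library(adegenet)

# PCA: scale allele frequencies, replacing missing genotypes by the mean
X   <- scaleGen(genind_obj, NA.method = "mean")
pca <- dudi.pca(X, cent = FALSE, scale = FALSE, scannf = FALSE, nf = 3)

# DAPC: compare k-means solutions with BIC, then run the discriminant analysis
grp <- find.clusters(genind_obj, max.n.clust = 10, n.pca = 100,
                     choose.n.clust = FALSE, criterion = "diffNgroup")
dap <- dapc(genind_obj, grp$grp, n.pca = 50, n.da = 2)
scatter(dap)
```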
LDna reveals five clusters of high LD in Anopheles baimaii populations
There are five known polymorphic inversions in Anopheles baimaii (see Introduction). Due to the restricted recombination in heterokaryotypes, a polymorphic inversion partitions the genetic information (created by mutation, drift and/or a selective sweep) in that genomic region into two groups: the ancestral and the inverted. Consequently, each polymorphic inversion is expected to create strong admixture LD among the inversion loci. We therefore predict that any inversion for which different karyotypes (hetero- or homokaryotypes) have been sampled should give rise to a SOC in population-genomic data. To test this hypothesis, we generated and analysed a restriction-site-associated DNA (RAD) sequence data set from 224 wild-caught individuals of A. baimaii, sampled throughout its distribution range. Our RAD sequence data set comprised 3008 loci from 184 individuals sampled from 91 geographical sites (Fig. S1). As r² can only be calculated between biallelic loci, we extracted all such SNPs from each RAD locus with a minor allele frequency above 10%. The data set used for subsequent LDna analyses comprised 3828 SNPs (median number of SNPs per RAD locus = 1, range 1-36).
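The SNP extraction step reduces to a minor-allele-frequency filter; a hedged sketch (`geno` is a hypothetical matrix of 0/1/2 genotype counts with SNPs in columns, not part of the original pipeline):

```r
# Minor allele frequency of one SNP from 0/1/2 counts of the alternate allele
maf <- function(g) {
  p <- mean(g, na.rm = TRUE) / 2   # alternate-allele frequency
  min(p, 1 - p)
}

mafs <- apply(geno, 2, maf)
geno_filtered <- geno[, mafs > 0.10]   # keep SNPs with MAF above 10%
```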
Application of LDna to the above data set resulted in the identification of five SOCs (Fig. 3A; Table 1). These SOCs were named 638_0.56, 739_0.49, 840_0.43, 927_0.38 and 1128_0.27, where the numbers before and after the underscore indicate a unique cluster number and the highest LD threshold at which a SOC is present, respectively. Figure 3B shows that each SOC constitutes a clear outlier with respect to λ. Figure 3C gives a snapshot of cluster formation at an LD threshold where all SOCs are visible, although some are small. Figure 3D gives a network visualization of the successive merging of the SOCs.
Hypothesis that SOCs correspond to inversions in Anopheles baimaii
To determine which, if any, of the five SOCs identified above correspond to inversions, we applied conventional population-genetic approaches. Lack of recombination within inversion heterokaryotypes is expected to result in genetic divergence at loci within the rearrangement, particularly those near to inversion break points. If a SOC marks an inversion, we therefore expect to be able to identify three genetically distinct groupings corresponding to the two alternative homokaryotypes and the heterokaryotype. Further, we expect the heterokaryotypic genetic groups to be genetically intermediate to the two homokaryotype groupings and to display a strong excess of heterozygous genotypes.
Population-genetic analyses support the inversion hypothesis
Analysis of the non-SOC loci showed strong support for two genetically distinct groups (Fig. S2 and Fig. 4A). This pattern serves as a null hypothesis to which population structure at the SOCs can be compared. Four SOCs (638_0.56, 739_0.49, 840_0.43 and 927_0.38) all differed from the non-SOC loci in having strong support for three genetically distinct groups (Fig. S2 and Fig. 4A). For these SOCs, DAPC found that a large proportion of the variation between these groups (>99.5%) was explained by the first discriminant function. As a result, for these SOCs, one group is intermediate between the other two. These intermediate groupings all show a strong excess of heterozygotes, as indicated by highly negative values of the inbreeding coefficient, FIS (Fig. 4B). In contrast, the distributions of FIS values for the other two groups are centred close to zero. These results are consistent with the inversion hypothesis, such that groups 1 and 3 for these four SOCs represent alternative homokaryotypes and group 2 for each SOC represents heterokaryotypic individuals. SOC 1128_0.27 showed a different pattern to the four described above. While there were still three major groups (Fig. S2), the first discriminant function explained much less of the variation among groups (77%). Four groups better partitioned the variation in FIS, and this is therefore shown in Fig. 4. Similar to groups 1 and 2 of the non-SOC loci, groups 3 and 4 have nonnegative FIS values (Fig. 4B). In contrast, group 2 shows negative FIS and is intermediate between group 1 and groups 3 and 4, consistent with group 2 being heterokaryotypic. We therefore hypothesize that SOC 1128_0.27 corresponds to a relatively rare inversion, where group 1 is the low-frequency homokaryotype and groups 3 and 4 are the high-frequency homokaryotype, detected as two groups for some other reason, for example due to geographical structuring.
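The heterozygote-excess criterion used above amounts to the sign of FIS = 1 − Hobs/Hexp within each group; a minimal per-locus sketch (illustrative only, not the adegenet internals):

```r
# g: 0/1/2 genotypes at one locus for the individuals of one inferred group
fis_locus <- function(g) {
  g <- g[!is.na(g)]
  p <- mean(g) / 2                  # allele frequency
  h_exp <- 2 * p * (1 - p)          # expected heterozygosity (Hardy-Weinberg)
  if (h_exp == 0) return(NA)        # monomorphic within the group
  h_obs <- mean(g == 1)             # observed heterozygote proportion
  1 - h_obs / h_exp                 # strongly negative = heterozygote excess
}
```

Heterokaryotypic groups are expected to yield strongly negative values, as in Fig. 4B.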
Mapping locates inversions to different chromosomal arms
The hypothesis that the five SOCs identified in A. baimaii correspond to five large polymorphic inversions in this species (see Introduction) further predicts that all loci from a given SOC will map together to distinct but large genomic regions. We tested this using a linkage map for A. baimaii (Appendix S5 and Fig. 5). Loci from the above SOCs mapped to 17 different A. dirus scaffolds of which 15 could be anchored to the A. baimaii linkage map. There is broad colinearity between the linkage map and the scaffolds (Fig. 5). However, there may also be rearrangements between the species, suggested by the crossing of lines between the linkage map and scaffold in Fig. 5, particularly in the upper portion of linkage group II. Loci from each of the five SOCs mapped to between two and four unique scaffolds (Fig. 5). Each SOC maps to large but distinct genomic regions: two each on linkage groups I and II, respectively, and one on the X-chromosome (Fig. 5). Only one locus (1 of 46 in SOC 1128_0.27) mapped away from the other loci in its SOC. For each of the five SOCs, between 96% and 100% of all BLAST hits against the A. gambiae genome (n = 7-47 per SOC) place each SOC on a different chromosome arm. SOC loci could colocate to a genomic region for several reasons, for example recombination cold spots such as telomeres or centromeres following admixture. However, given the consistency with previous cytological data (see Introduction), the observation that the SOCs map to the five large chromosome arms adds further support to the population-genetic analyses above in favour of the inversion hypothesis.
Identification of SOCs is robust to parameter choice and data set size
Identification of the SOCs above by LDna depends on the particular data set and requires the choice of values for two key parameters: |E|min (the minimum number of edges required for a cluster to be considered) and φ (which controls when clusters are defined as outliers). To test the extent to which identification of the SOCs associated with inversions above depends on the choice of |E|min and φ, we repeated the above LDna analyses with a wide range of parameter value combinations. Details of the resulting SOC losses and gains are shown in Fig. 6A. Two of the SOCs (1128_0.27 and 840_0.43) were recovered from all of this parameter space. All five SOCs associated with inversions, and no alternative SOCs, were recovered from a substantial region of parameter space (white area in Fig. 6A). Figure 6B shows trees resulting from particular combinations of parameter values. Tree 1 (where φ = 7 and |E|min = 20) serves as a reference point, corresponding to the tree used in the analyses above (Fig. 3A). There were three main reasons why a SOC in Tree 1 was not identified when using different parameter combinations. First, when |E|min is high, it can exceed the number of edges (|E|) of the cluster in question. For Tree 2 in Fig. 6B (φ = 7, |E|min = 70), SOC 739_0.49 is lost for this reason. Second, when φ was high, the associated λlim can exceed the λ value of the SOC in question. For Tree 3 (φ = 10, |E|min = 20), SOC 638_0.56 is lost for this reason. Third, when φ was low, the identification of additional SOCs meant that a cluster appeared to be a compound of more than one outlier cluster (COC, see above). For instance, as shown in Tree 4, when φ = 5 (|E|min = 20), the additional identification of SOC 777_0.47 meant that SOC 927_0.38 was not identified. Conversely, gains of SOCs tend to occur at reduced values of both parameters (the green area in A). For instance, as shown in Tree 5, where |E|min = 5 and φ = 5, an additional small SOC was identified (390_0.79).
Only when both parameter values were reduced to very low levels were many additional and potentially spurious SOCs gained (Tree 6). Thus, while it is important to note that changes in |E|min and φ can lead to different SOCs being identified, all the SOCs identified as corresponding to inversions were to a large extent robust to changes in these parameters. Identification of the SOCs above by LDna could also depend on the size of the data set, as clusters of loci truly sharing high LD will have fewer representatives in a data set of reduced size. To explore the effect of data set size, we carried out LDna on subsamples of the A. baimaii RAD sequence data set. We compared each SOC identified in the subsampled data sets to the five SOCs corresponding to inversions, here denoted 'reference SOCs'. We subsampled at random, without replacement, 50% (n = 1914) or 25% (n = 957) of all the available SNPs from the full data set and analysed ten replicates each. The parameter values used were as follows: |E|min = 16 and φ = 3 for the 50% subsampled data sets; and |E|min = 14 and φ = 2 for the 25% subsampled data sets. These parameter values were chosen as they gave results similar to those obtained with the full data set. In particular, φ was kept low enough to avoid the identification of SOCs that included loci from more than one reference SOC.

Fig. 6 The effects of parameter choice on LDna. The two user-defined input parameters for LDna are φ, which controls when clusters are defined as outliers, and |E|min, the minimum number of edges required for a cluster to be considered as an outlier. (A) We used the results from the original LDna analyses (that identified five SOCs associated with inversions) as a reference point ①. With respect to this reference, we assessed how many of the SOCs were not identified (losses), and how many additional SOCs were identified (gains), by LDna. White indicates parameter space where results exactly matched the reference. In addition to the reference (Tree ①), (B) shows five examples of LDna results (Trees ②-⑥) at different combinations of φ and |E|min, as indicated above the trees and in (A).

From the 50% subsampled data sets, we recovered SOCs corresponding to all five reference SOCs from all replicates (Fig. S3A, Supporting Information). With the 25% subsampled data sets, LDna failed to identify all the SOCs corresponding to the reference SOCs in 6 of 10 replicates (denoted by pink circles in Fig. S3B, Supporting Information). In 2 of 10 replicates, SOCs not corresponding to any reference SOC were also recovered (denoted by red circles in Fig. S3B, Supporting Information). Smaller data set sizes can therefore reduce the ability of LDna to detect biologically relevant SOCs and, in some instances, lead to the detection of spurious SOCs. Nonetheless, as sequencing throughput is typically increasing, limited data set size seems unlikely to be a major impediment to the application of LDna.
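The subsampling check is a plain resampling loop; schematically (the helper `run_ldna` is a hypothetical stand-in for the full LDna pipeline, not a function of the package):

```r
set.seed(42)
replicates_50 <- lapply(1:10, function(i) {
  keep <- sample(seq_len(ncol(geno_filtered)),
                 size = floor(0.5 * ncol(geno_filtered)))
  run_ldna(geno_filtered[, keep], E_min = 16, phi = 3)  # parameters as in the text
})
```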
LDna can identify loci associated with local adaptation and population-demographic history
We hypothesize that in addition to inversions, LDna can be used to detect SOCs resulting from geographical structuring and local adaptation. To test this, we applied LDna to the three-spined stickleback (Gasterosteus aculeatus) system in which geographical structuring and local adaptation have been well characterized (Jones et al. 2012). This data set comprises SNP data from 21 genomes from multiple pairs of two highly morphologically and genetically distinct ecotypes locally adapted to marine and freshwater environments, from Pacific and Atlantic populations. Three small inversions on chromosomes I, XI and XXI that differ in their frequencies between the two ecotypes have previously been identified from this data set (Jones et al. 2012). Thus, in addition to finding SOCs corresponding to these inversions, we predict that LDna will identify SOCs resulting from population structure (Atlantic vs. Pacific) and local adaptation (Saltwater vs. Freshwater).
We applied LDna to a high-quality subset of 5962 SNPs from the chromosomes with known inversions (I, XI and XXI). Exploring variation across parameter values (as demonstrated in Fig. 6 and in Appendix S1 and S2, Supporting Information) allowed us to identify five SOCs (494_0.82, 495_0.82, 496_0.82, 618_0.79 and 673_0.76; Table 1), corresponding to each of the large branches in Fig. 7A, at |E|min = 10 and φ = 5.7. All loci from SOC 496_0.82 mapped to the chromosome I inversion, and all but four loci (4 of 41) from SOC 494_0.82 mapped to the chromosome XXI inversion. No SOCs mapped specifically to the known inversion on chromosome XI, probably because not all SNPs were used (see Appendix S4) and the inversion is small. In contrast, the three remaining SOCs contain loci widely distributed across all three chromosomes. One of these SOCs (495_0.82) contains loci across all three chromosomes in particularly high LD (>0.95, Fig. 7B). Consequently, we infer that two SOCs correspond to two of the three previously identified inversions and the three remaining SOCs correspond to LD clusters arising from other causes.
The association of each SOC with respect to population structure (Atlantic vs. Pacific) and local adaptation (marine vs. freshwater) was assessed by PCA. Three of the five SOCs (494_0.82, 495_0.82 and 496_0.82), including the two that correspond to inversions, broadly separate freshwater and marine ecotypes (blue vs. red in Fig. 8A). This is consistent with these SOCs comprising loci associated with adaptation to freshwater or marine habitats. In the case of 495_0.82, the separation is specifically between freshwater Pacific individuals and all others. Loci from the remaining two SOCs (618_0.79 and 673_0.76) broadly separate individuals from Pacific and Atlantic populations (open vs. filled in Fig. 8A). Overall, these analyses reveal that LDna can identify LD clusters associated with at least three different (and sometimes overlapping) evolutionary phenomena: inversions, local adaptation and geographical population structure.
Discussion
Here, we have developed and used LDna to detect multiple linked and unlinked subsets of loci sharing high LD. Analyses of these subsets of loci using a range of population-genetic analyses then enabled us to infer how they are involved in different evolutionary phenomena: inversions, local adaptation and geographical structure. Below we discuss the empirical findings, before turning to the usefulness of LDna in the context of other methods available to study genomewide LD.
LDna and inversions
Through their effect on inhibiting recombination, inversions play an important role in evolution, particularly in local adaptation and speciation (Kirkpatrick & Barton 2006;Hoffmann & Rieseberg 2008;Lowry & Willis 2010). Traditionally, studying inversions required cytological studies (e.g. fluorescence in situ hybridization techniques; Tang et al. 2008), BAC-clone sequencing (Tang et al. 2008) and/or sequencing of full genomes (Corbett-Detig et al. 2012). These are laborious and/or expensive, particularly in nonmodel species. Here, we demonstrated that LDna, coupled with population-genetic analyses, can be used to identify loci putatively associated with inversions in both a timely and cost-effective manner, even without mapping information. Such inversions can be both large, as in Anopheles baimaii, and small, as in the sticklebacks. Further, if there are SNPs within SOCs that are fixed (or almost fixed) between the inversion karyotypes, these could potentially be used as inversion markers to facilitate large-scale studies of inversion polymorphism in natural populations. Thus, LDna opens up the possibility of studying inversion polymorphism, by relatively simple means, in any species for which a population-genomic data set can be generated.
LDna and local adaptation
In the original generation and analysis of the stickleback data set used here, Jones et al. (2012) used supervised approaches to identify a large number of genomic regions that were consistently associated with marine-freshwater divergence. In contrast, LDna allows an unsupervised approach to detect clusters of loci in high LD across the whole genome, from any source, in a single analysis. Contrary to what might have been expected from the original study, we did not find a unique SOC that separated marine and freshwater individuals globally (i.e. regardless of which ocean they were sampled from). Instead, we found one SOC (495_0.82) associated with adaptation to freshwater in the Pacific only. It is thus possible that a large part of the divergence between marine and freshwater ecotypes observed in the original study is driven by differences specifically between the ecotypes in the Pacific. Such unexpected patterns may be difficult to detect by supervised approaches (in which groups between which differences are sought need to be defined a priori), including standard divergence-based outlier analyses. LDna, as an unsupervised approach, can therefore provide a more nuanced view of loci involved in complex adaptations.
There are several distinct subclusters visible within SOC 495_0.82 (Fig. 7A), comprising a surprisingly large number of loci spread across all three chromosomes analysed here (Fig. 7B). It is likely that only a few loci in SOC 495_0.82 are directly involved in local adaptation (either due to selection acting in parallel in different freshwater systems or to epistatic fitness interactions; Hohenlohe et al. 2012). Instead, the large number of loci in this SOC likely results from divergence hitchhiking (Via 2011) coupled with the reduced effects of recombination due to geographical structuring. Loci within a SOC that are not physically colocated can provide good candidates for loci directly associated with parallel selection or epistatic fitness interactions. These include the individual loci in exceptionally high LD across chromosomes, as indicated by clusters with a mix of loci from different chromosomes in Fig. 7B. The four loci in the SOC associated with the chromosome XXI inversion (494_0.82) that map outside it are good candidates. In particular, the one with the highest LD to the rest of the cluster falls within the predicted gene ENSGACT00000014703 on chromosome I, encoding a protein homologous to the dynein light chain, involved in intracellular vesicle transport. This gene is known to be significantly associated with marine-freshwater divergence (it has a colocated peak in the 'Marine-Freshwater Cluster Separation Score', one of 174 with a genomewide false discovery rate of P < 0.05; Jones et al. 2012).
LDna and geographical structure
We found two SOCs (618_0.79 and 673_0.76) associated with Atlantic-Pacific structuring in the sticklebacks. Closer examination of the allele frequencies at these loci (Fig. S4) shows highly contrasting patterns. For SOC 673_0.76, many loci that are heterozygous in the Pacific are homozygous in the Atlantic. This is consistent with a founder event following the spread of this species from the Pacific to the Atlantic (Colosimo et al. 2005), with the associated drift resulting in the loss of genetic diversity in the Atlantic population. In contrast, in SOC 618_0.79, the allele frequency differences are far more divergent between the oceans (FST = 0.64 vs. 0.10 for SOC 673_0.76). In other words, this SOC comprises the most differentiated loci between the oceans: those that are either fixed or nearly fixed between them (Fig. S4). Interestingly, within 618_0.79, the PCA also identified some differentiation between freshwater and marine environments for Atlantic individuals (Fig. 8A), indicating that some of these loci may also be involved in marine-freshwater divergence, specifically within the Atlantic. Overall, this demonstrates that LDna can separate different evolutionary phenomena even when they are associated with the same historical separation event.
Approaches to the study of genomewide LD

Typically, LD declines quickly over short physical distances in wild populations (Kim et al. 2007; Slate & Pemberton 2007; Gray et al. 2009). Despite this, LD can span large contiguous genomic regions within chromosomes, as has been well documented in humans (e.g. Conrad et al. 2006). Several methods have been developed to characterize and utilize this information on LD. These include the integrated haplotype score (iHS) test (Voight et al. 2006) and the cross-population extended haplotype homozygosity (XP-EHH) test (Sabeti et al. 2007), which detect extended haplotypes that indicate the action of natural selection. Other methods have accessed such information on haplotypes and correlated allele frequencies to increase the power to make inferences of population structure, admixture and demography (Falush et al. 2003; Lawson et al. 2012; Ralph & Coop 2013).
It is becoming increasingly clear that LD can also occur among noncontiguous regions of the genome, even between chromosomes, in many taxa including humans (Wilson & Goldstein 2000; Hohenlohe et al. 2012; Koch et al. 2013; Schumer et al. 2014). Approaches to understand cross-genome (rather than localized) patterns of LD tend to focus on pairwise comparisons between loci/haplotype blocks. While the LDna approach also relies on a matrix of pairwise estimates of LD, its use of networks goes beyond pairwise comparisons to identify sets of loci sharing high LD. This potentially enables LDna to capture information about high-order LD within the genome.
Conclusions
The insights provided by LDna are possible in any population-genomic data set, but are likely to be particularly valuable for nonmodel species, where a global view of the genomic architecture is otherwise difficult to gain. We were able not only to detect potentially unexpected signals of LD (such as those caused by inversions), but also to partition loci into sets affected by different evolutionary phenomena. This gives confidence that LDna will also provide insights in other situations where a complex LD signal involving noncontiguous parts of the genome is expected (e.g. assortative mating, epistatic interactions among multiple loci and species introgression). LDna could also be used to separate clusters of loci in high LD with the purpose of removing 'outliers' prior to studies that require neutral markers, for example to estimate population structure and population history. This broad applicability is coupled with access to a global view of evolutionary phenomena affecting genomes and the possibility of reasoned partitioning of loci within them, without prior assumptions. Together, these features make LDna an excellent exploratory tool for any population-genomic data set.

Appendix S1 An introduction to LDna: basics. Tutorial which gives an introduction to the R package 'LDna'. A continually updated version can be found at: https://github.com/petrikemppainen/LDna
Appendix S2 An introduction to LDna: advanced. Tutorial which explains some of the more advanced features of LDna, including suggestions on how to find appropriate values for the parameters φ and |E|min. A continually updated version is available from: https://github.com/petrikemppainen/LDna
Appendix S3 Linkage disequilibrium network analysis (LDna) on simulated data.
Appendix S4 Anopheles baimaii RAD sequence data set and three-spined stickleback SNP data set preparation.
"Biology",
"Computer Science"
] |
An Empirical Model of Aerodynamic Drag in Alpine Skiing
This paper describes an empirical model of aerodynamic drag for a range of body positions commonly used in alpine skiing. In order to calculate the drag coefficient (CD), a method for calculating the frontal area of an alpine skier inside a wind tunnel was used, with an uncertainty of 0.012 m². The general model for aerodynamic drag was based on measurements from one alpine skier. To make the model applicable for athletes of different body sizes and shapes, an investigation of individual adjustments of the model was made, based on measurements of four alpine skiers. The results showed a variation of ±1.4% in the drag coefficient between the different subjects. The frontal area in a reference position was considered a suitable scaling variable. Validations showed an uncertainty of ±3% for the individually adjusted model.
Introduction
Alpine skiing is a highly competitive sport, where the winning margins are often on the order of a hundredth of a second. Good performance analysis tools are therefore important to understand where an athlete is gaining and losing time. The Norwegian ski federation (NSF) uses a differential global navigation satellite system (dGNSS) [1,2] to analyze performance and calculate the trajectory of the skier. By calculating the derivative of the velocity vector, the system can also estimate the total instantaneous braking force acting on the skier. The braking force is the sum of the aerodynamic drag force and the ski-snow friction force. The technology can, however, not determine how much of the braking force is due to ski-snow friction and how much is due to aerodynamic drag. The drag force can constitute as much as 80% of the total braking force in the speed disciplines downhill and super G, and a better understanding of the drag force is therefore desirable.
Although it is known that aerodynamic drag causes most of the braking force in the speed disciplines, most research in alpine skiing has been done on ski-snow friction. Determination of the drag force is complex. It is determined by variables such as the relative velocity, the frontal area of the skier, the shape of the skier, and the skier's suit and equipment. Many of these factors change continuously throughout a race. The frontal area, the shape of the skier, and the skier's suit and equipment are all compiled into the variable called the drag area (CDA).
Different ways to model the drag force on an alpine skier have previously been investigated by M. Supej et al. [3] and F. Meyer et al. [4]. With a model of the drag area, one should be able to determine the drag force and, in turn, the ski-snow friction force, and thereby determine what is causing a time loss. A good understanding of how the drag changes with the position of different segments of the human body is also desirable in a race situation. A skier with good knowledge of how drag depends on body position will have an advantage over others by always choosing the most aerodynamic position possible. Coaches could also use this knowledge when analyzing videos after a race.
The aim of this paper was to build a complete database of drag area with respect to the position of different body segments, covering the full range of body motion in alpine skiing. This database was used to make a programmatic model that takes the angles between the different body segments as input to compute CDA.
Wind Tunnel Testing
The experiments were carried out in a wind tunnel at the Norwegian University of Science and Technology (NTNU). The test section of the wind tunnel is 1.8 m high, 2.7 m wide and 12.5 m long and uses a 220-kW centrifugal fan to produce wind speeds up to 25 m/s. The drag force FD was measured with a Schenck six-component force balance and the wind speed with a pitot-probe mounted upstream in the wind tunnel. Alpine bindings were mounted directly to the force plate. A live video feed showing the side and rear view of the skier was projected on the wind tunnel floor in front of the skier. Guidelines were added in order to help the test subject keep a consistent position.
The velocity was set to approximately 20 m/s and the sampling time to 30 s, so that the test subject could maintain the same position throughout the sample time. For every position, three measurements were performed and the mean value was calculated. At the start and the end of each sample, a picture was taken of the test subject. This was done to evaluate whether the test subject had maintained the desired position and to estimate the frontal area. The angles of the knees, hips, arms and elbows were measured manually. Example pictures with measured angles are shown in Figure 1. The angles of the hip and the knee were considered dependent on each other, and the measurements of the knee flexion and the hip flexion were made together. The knee angle and the hip angle were defined as 180° in an upright position, ranging down to 0° with flexion. The arm angle was defined as 90° with the arms straight out to the side and 0° with the arms along the torso. The elbows were defined as 180° when pointing to the side and 90° when pointing forward by elbow flexion. A reference position was chosen as an upright position with both arms to the sides. This corresponded to 150° in knee angle, 160° in hip angle, 90° in arm angle and 180° in elbow angle, as shown in Figure 1.
Frontal Area Measurements
Pictures from a camera behind the test subject (Figure 2a) were used for the frontal area measurements. A setup with small lamps was used to illuminate the background and create a sharp silhouette. The resulting image is shown in Figure 2b. The frontal area was calculated by counting black pixels in a binary image. A calibration factor, representing pixels per square meter, was set from measurements of two different cylinders with known area. A low threshold pixel value was set to ensure that none of the pixels on the test subject would turn up white. Some unwanted regions in the picture turned up black and were manually cut out of the picture afterwards. The region around the test subject's legs was also too dark, and it was cut out of all pictures. The picture after cutting out the unwanted black regions is shown in Figure 2b.
The frontal area in the region around the test subject's legs was computed by manually marking the region in 20 different pictures. The average area from the 20 pictures was added to the frontal area of the picture. The uncertainty of the frontal area measurements was calculated to be ±0.012 m² using the root mean square error from nine pictures in the same position.
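A minimal sketch of the pixel-counting step, assuming hypothetical file names and a calibration factor obtained from the cylinders of known area, could look like this (illustrative only, not the authors' code):

```python
# Estimate frontal area from a back-lit silhouette photograph: count dark
# pixels in a thresholded grayscale image and convert with a calibration
# factor (pixels per square meter) measured from objects of known area.
import numpy as np
from PIL import Image

def frontal_area_m2(image_path: str, pixels_per_m2: float, threshold: int = 30) -> float:
    gray = np.asarray(Image.open(image_path).convert("L"))
    dark_pixels = int((gray < threshold).sum())  # silhouette pixels
    return dark_pixels / pixels_per_m2

# Hypothetical calibration from a cylinder of known frontal area:
# pixels_per_m2 = dark_pixel_count_of_cylinder / known_cylinder_area_m2
# area = frontal_area_m2("skier_rear_view.png", pixels_per_m2)
```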
Blockage Correction
When doing experiments on an alpine skier in a closed wind tunnel, the test subject takes up some of the space in the cross section. The flow around the subject behaves differently than on an open alpine hill because of the walls of the wind tunnel. This error is called blockage error, and it has to be taken into account when doing measurements in a closed wind tunnel [5]. Maskell suggested an equation (Equation (1)) for estimating the wake blockage in a closed wind tunnel, where Cdu is the uncorrected drag coefficient, Cdc is the fully corrected drag coefficient, A is the projected frontal area of the object, S is the area of the cross section of the wind tunnel, and θ is the blockage constant. The blockage constant is an empirical constant determined by the base pressure coefficient and the aspect ratio; it was estimated to be θ = 2.58 by assuming a constant aspect ratio of 3 for a human body [6]. Rearranging Equation (1) and inserting the values for θ and S gives the corrected drag coefficient (Equation (2)).
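Since Equations (1) and (2) themselves are not reproduced in this extract, the sketch below assumes the commonly quoted form of Maskell's correction, CDc = CDu / (1 + θ·CDu·A/S); the exact form used in the paper may differ:

```python
# Wake-blockage correction for a closed wind tunnel, assuming the common form
# of Maskell's relation CDc = CDu / (1 + theta * CDu * A / S). theta and the
# tunnel cross-section S are taken from the text; the equation form itself is
# an assumption, as the paper's Equations (1)-(2) are not reproduced here.
THETA = 2.58       # empirical blockage constant (human aspect ratio of 3)
S = 1.8 * 2.7      # tunnel cross-section area in m^2 (1.8 m high, 2.7 m wide)

def corrected_cd(cd_uncorrected: float, frontal_area_m2: float) -> float:
    return cd_uncorrected / (1.0 + THETA * cd_uncorrected * frontal_area_m2 / S)

print(corrected_cd(0.9, 0.6))  # e.g. an upright skier with A = 0.6 m^2
```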
Model Description
The model was based on three different regression schemes made from the results of the hip-knee motion, arm angle and elbow angle experiments. The model was based on the percentage change in CDA from the reference position; the CDA value in the reference position was defined from wind tunnel measurements. The input variables were the knee angle, hip angle, right and left arm angles, and right and left elbow angles. The hip-knee scheme computed a percentage CDA relative to the defined reference position. The arm and elbow angles were then used to compute the relative percentage change in CDA resulting from the right and left arm and elbow, respectively. This was added to or subtracted from the value computed by the hip-knee scheme. The arms and elbows were assumed independent of each other. Therefore, it was assumed that the right and left arm and elbow each contributed half of the change in CDA. The model can also easily be modified or expanded with new results or other input variables.
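As a concrete illustration of this structure, a minimal sketch follows; the regression slopes are hypothetical placeholders (only the reference CDA of 0.412 m², the angle conventions, and the half-contribution of each limb are taken from the text):

```python
# Sketch of the model structure: a piecewise-linear hip-knee scheme gives a
# percentage CDA relative to the reference position (hip 160, knee 150, arms
# 90, elbows 180 degrees); each arm/elbow contributes half its scheme's change.
CDA_REF = 0.412  # m^2, reference-position drag area of the base test subject

def pct_hip_knee(hip_deg: float, knee_deg: float) -> float:
    # Hypothetical slopes; the paper fits two linear parts split at hip = 90.
    if hip_deg >= 90.0:
        return 100.0 - 0.45 * (160.0 - hip_deg) - 0.20 * (150.0 - knee_deg)
    return (100.0 - 0.45 * 70.0 - 0.20 * (150.0 - knee_deg)
            - 0.15 * (90.0 - hip_deg))

def pct_arm(angle_deg: float) -> float:    # change relative to 90-degree arms
    return -0.05 * (90.0 - angle_deg)      # hypothetical slope

def pct_elbow(angle_deg: float) -> float:  # change relative to 180-degree elbows
    return -0.03 * (180.0 - angle_deg)     # hypothetical slope

def cda(hip, knee, arm_r, arm_l, elbow_r, elbow_l, a_ref_ratio=1.0):
    pct = pct_hip_knee(hip, knee)
    pct += 0.5 * (pct_arm(arm_r) + pct_arm(arm_l))          # limbs assumed
    pct += 0.5 * (pct_elbow(elbow_r) + pct_elbow(elbow_l))  # independent
    # Individual adjustment: scale by the athlete's reference frontal area.
    return CDA_REF * a_ref_ratio * pct / 100.0

print(cda(160, 150, 90, 90, 180, 180))  # reference position -> 0.412 m^2
```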
Hip-Knee Motion
The CDA value in the reference position (at 100%) for the test subject was measured to be 0.412 m², after accounting for the blockage correction using Equation (2). Accounting for the blockage correction is essential here, since the frontal area changes by 41.8% from the highest to the lowest position. The tendency of the results and the measured points are illustrated in Figure 3a. Based on these results, the slopes in both the knee-angle direction (y-axis) and the hip-angle direction (x-axis) were assumed to be constant for hip angles greater than 90°. The same was found for hip angles smaller than 90°, but with a different slope. The regression model was therefore split into two linear parts, one for hip angles smaller than 90° and one for hip angles greater than 90°. The regression model is shown in Figure 3b. The change in frontal area from the defined reference position to the lowest position was 41.2%, and the change in the drag coefficient CD was 23.1%. This means that the frontal area makes the greatest contribution to the change in CDA, which can explain why the change in CDA is greater for hip angles ≥90°. A large part of the frontal area of a human body lies between the hip and the shoulders, and by decreasing the hip angle this area is effectively reduced. The coefficient of determination for the regression scheme was calculated to be R² = 0.982.
Arm and Elbow Angle
For the arm and elbow schemes, the relative change in CDA was measured with three different knee and hip angles. The reference position of the arm was defined as 90°, and the measurements and the regression scheme are shown in Figure 4a; the reference position of the elbow was defined as 180°, and the results and the regression scheme are shown in Figure 4b. The measurements were done on two different test subjects: TS2 was the same test subject as for the hip-knee motion, and TS1 was from a preliminary experiment. For arm angles greater than 0°, the frontal area is assumed constant and the changes are due only to CD. The changes in CDA for the elbow angle are proportional to the changes in the frontal area. The coefficient of determination was calculated to be R² = 0.953 for the arm scheme and R² = 0.993 for the elbow scheme.
Validation of the Model
To make the model universally applicable for athletes of different body sizes and shapes, measurements were done on four different test subjects (TS3, TS4, TS5 and TS6), all professional athletes from the Norwegian national team. The goal of this experiment was to determine how both CDA and CD change for different body shapes and sizes, and to use this information to find an individual adjustment factor for the model. The results for the percentage change from the reference position down to the lowest position (the hockey position) showed only small variations between the test subjects. The frontal area was calculated in order to separate the two variables, CD and A. Mean values in both the reference position and the hockey position were calculated from the four test subjects in this experiment and the one test subject used earlier (TS2), and the results are shown as a percentage difference relative to the mean value in Figure 5. An interesting result from this experiment was the small variation in CD between the test subjects in the reference position. The difference in frontal area between TS2 and TS6, for instance, was 17.1%, while the biggest difference from the mean value of the drag coefficient was only 1.4%. This was within the range of the uncertainty of the frontal area measurements, and the drag coefficient in the reference position was therefore set to a constant value of CD = 0.725, the mean value over the test subjects. Based on these results, it was assumed that only the projected frontal area in the reference position was needed as individual input to the general model. With a constant percentage change from the reference position to the hockey position, no adjustments had to be made to the slopes in the model, and a constant CD implied that the only variable that had to be changed for individual adjustment was the frontal area in the reference position of the test subject. By using the model based on measurements made on TS2 and only changing the frontal area of the test subjects, the model was validated for the other four test subjects in three different positions and compared to experimental results, shown in Figure 6. As can be seen from Figure 6, all the experimental values lie within ±3% of the values computed by the model. This is within the range of the expected uncertainty of the experiments. In addition, there is the uncertainty associated with wind tunnel measurements in comparison to real outdoor conditions.
Conclusions
A database of CDA values for a range of body positions commonly experienced in alpine skiing has been made from wind tunnel measurements. From this, an empirical model that calculates CDA based on angles between different body segments has been introduced, with an uncertainty of ±3%. A new method for calculating the frontal area inside a wind tunnel, producing results with an uncertainty of ±0.012 m², was used. The model was tested and validated on four different test subjects and showed that the percentage relative change in CDA from the highest to the lowest position was constant, and that CD was constant across test subjects standing in the same position. The only parameter needed for personal adjustment of the model is the frontal area of the test subject in the reference position. By changing this parameter only, the model retained an uncertainty of ±3%.
Figure 1. Test subject standing in the reference position. Picture (a) shows the side view and picture (b) the rear view.
Figure 2. Background illumination and the resulting binary picture for the frontal area measurements. (a) shows the original picture taken inside the wind tunnel and (b) the binary picture after cutting out unwanted black regions.
Figure 3. Percentage CDA relative to the reference position at different knee and hip angles, measured on one test subject. The color bar shows the percentage CDA, and the measured points from the experiment are presented with red dots. (a) Measurement results from the experiment and (b) the resulting regression model.
Figure 4. Percentage change of CDA values relative to the reference position. TS2* represents measurements from the reference position of test subject 2 (TS2), TS2' represents measurements done with a high knee angle and a low hip angle on TS2, and TS1 is a preliminary experiment in the lowest position for that test subject. For the arm angle, −10° is defined as arms in front of the body.
Figure 5. Percentage difference of the frontal area and drag coefficient in the reference and hockey positions, relative to the mean value, for five different test subjects.
Figure 6. Percentage difference between the measured and the modelled CDA in three different positions, with error bars from the measurements.
Moderate and Severe Level of Food Insecurity Is Associated with High Calorie-Dense Food Consumption of Filipino Households
Food insecurity is often deeply rooted in poverty. Hence, accessibility and the quality of foods consumed may affect the dietary pattern. The study aims to assess the relationship between food insecurity and dietary consumption. This investigation analyzed data from the 2015 Updating of the National Nutrition Survey. The Household Food Insecurity Access Scale (HFIAS) was used to determine household food security status and the prevalence of food insecurity. Food weighing, food inventory, and food recall were the methods used to collect food consumption data from sampled households. The study revealed poor nutrient quality and a greater likelihood of nutrient inadequacy among moderately and severely food insecure households. Mild, moderate, and severe levels of food insecurity were found to affect 12%, 32%, and 22% of the population, respectively. Testing showed that both moderately and severely food insecure families have significantly lower mean consumption of meat, milk, and fats and oils in contrast to food secure households. In comparison with food secure households, moderately and severely food insecure households consume higher amounts of cereals and cereal products, rice, and vegetables. Moderately and severely food insecure households have higher consumption of total carbohydrates but significantly lower average intakes of vitamin A, riboflavin, niacin, and total fat relative to food secure households. Moreover, the results of the multiple logistic regression revealed that food insecure households have a higher likelihood of being deficient in energy, protein, calcium, vitamin A, thiamin, riboflavin, niacin, and vitamin C intakes, with the exception of iron (p value <0.05). Indeed, household food insecurity was associated with higher consumption of calorie-dense food among Filipino households. This explains the lower nutrient quality and higher likelihood of nutrient inadequacy among moderately and severely food insecure households.
Introduction
The Committee on World Food Security (CFS) stated that food security occurs when all people, at all times, have both physical and economic access to sufficient, safe, and nutritious food to fulfill their dietary needs as well as their food preferences for an active and healthy life [1]. Nutrition is a crucial human need, which is why lack of food has major consequences, such as hunger, obesity, cancer, and poverty [2]. The UN FAO's latest findings have shown that 9% of the world's population was severely food insecure, while 17% experienced moderate levels of food insecurity. Food insecurity, combining both moderate and severe levels, impacts 26.4 percent, or approximately 2 billion, of the global population [3]. In the Philippines in 2015, more than half of Filipino families were suffering from moderate food insecurity (32%) or severe food insecurity (22%) [4].
Food insecurity may be associated with poor nutrition. Even so, the relationship between food insecurity and dietary patterns is not yet fully established, considering the limited number of local studies on this matter. The amount, variation, or combination of different food items in a meal, as well as the frequency of consumption, is referred to as a dietary pattern [5]. Earlier studies have connected food insecurity with decreased consumption of healthy foods and poor dietary quality, with specific reference to low fruit and vegetable consumption [6,7]. In a previous study among children, child food insecurity was associated with lower vegetable intake and greater calorie, fat, sugar, and fiber consumption [8]. Food insecurity was also found to be associated with lower HEI levels and increased consumption of added sugars as well as empty calories in a 2003-2010 NHANES study [9].
With increasing interest in the significance of food insecurity as a health factor, the number of studies examining the link between food insecurity and dietary patterns has increased significantly in recent years, but no Philippine data have been generated for local use by program planners. The purpose of this study is to evaluate the relationship of food insecurity with dietary patterns and food sources of households in the Philippines.
Research Design and Study Population.
The present study was derived from the 2015 Updating of the National Nutrition Survey, which was carried out by the Food and Nutrition Research Institute of the Department of Science and Technology. This is a cross-sectional survey that utilized a stratified three-stage sampling approach to represent all 17 regions and 80 provinces across the country, with a coverage rate of 96.6 percent in both urban and rural areas. The first stage involved choosing primary sampling units (PSUs), which were made up of one barangay (village) or a group of adjacent barangays with at least 500 households. In the second stage, enumeration areas (EAs) were determined within each primary sampling unit. Each EA comprises 150 to 200 households situated in a contiguous area in a barangay. The third and final stage involved selecting households from the sampled EAs; the selected households served as the ultimate sampling unit.
A total of 9,930 sampled households were selected for the study. However, 262 households were excluded due to missing data on the variables of interest, leaving a total of 9,668 households for the analysis in this study. The DOST-FNRI Institutional Ethics Research Committee (FIERC) authorized the data collection instruments and survey protocol utilized in this study (FIERC protocol code: FNRI-2015-006). All surveyed households signed an informed consent form before taking part in the study.
Household Dietary Consumption.
Researchers used a digital measuring scale (Sartorius AZ4101 Digital Dietary Balance) to weigh household food items. All food prepared and served to the household for the day was weighed before cooking or in its raw state. Plate waste, given-away food, and leftover food were also weighed in order to determine the actual weight of the food consumed. Nonperishable items that could be used during the measuring day, such as coffee, sugar, salt, cooking oil, and various condiments, were weighed at the beginning and end of the day. Household food consumption was recorded in terms of kind and amount.
The researchers validated food weighing by weighing similar food items consumed by household members outside the home. A 24-hour food recall was conducted among household members via face-to-face interview, wherein household members were asked to recall their food consumption. Most of the time, recalled food was in a cooked state; foods eaten raw were reported in their raw state. To determine the portion sizes of the various food items consumed, devices such as wooden matchboxes, tablespoons, and plastic circles were utilized.
Before analysis, four steps of data validation and assessment were applied to the acquired data: (1) the dataset was edited and verified to guarantee accurate and high-quality survey data, and each food item was given a matching food ID code based on the Philippine Food Composition Table (PhilFCT); (2) edited food item data were encoded in the Household Dietary Evaluation System (HDES), a computer system that translates food items into energy and nutrient consumption per household; (3) the HDES transformed all food weights into gross weight or "as purchased weight", formerly the standard unit for encoding food weights; the actual weight of food consumed each day was calculated as the gross food weight less the combined weights of leftover and discarded food and plate waste; (4) the energy and nutrient intakes of households were compared to the energy and nutrient requirements outlined in the Philippine Dietary Reference Intakes (PDRI). Energy consumption was compared to the Recommended Energy Intake (REI), whereas nutrient intake was compared to the estimated average requirement (EAR). The results were given as the proportion of families that did not achieve the recommended intake.
To compute household food consumption, the raw intake of each food group/nutrient was divided by the consumption unit (CU). In this study, one CU corresponds to a member or guest who consumed all major meals at home for the whole day. It should be emphasized, however, that per capita reporting of family food intake has several limitations because it does not account for age, gender, or physiological differences among household members.
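A minimal sketch of this normalization, with hypothetical numbers standing in for the survey data, could look as follows (the cut-offs shown are placeholders; the survey applies age- and sex-specific PDRI values):

```python
# Step (3): net food weight = gross weight minus leftover, discarded food and
# plate waste; then divide household intake by the consumption unit (CU) and
# compare against reference values (REI for energy, EAR for nutrients).
gross_g, leftover_g, discarded_g, plate_waste_g = 1600.0, 120.0, 40.0, 30.0
net_g = gross_g - (leftover_g + discarded_g + plate_waste_g)

household_intake = {"energy_kcal": 7607.0, "protein_g": 228.0, "iron_mg": 38.0}
cu = 4.2  # hypothetical: full-day-equivalent eaters in the household

per_cu = {k: v / cu for k, v in household_intake.items()}

# Hypothetical adult cut-offs standing in for the PDRI reference values.
reference = {"energy_kcal": 2000.0, "protein_g": 50.0, "iron_mg": 10.0}
inadequate = {k: per_cu[k] < reference[k] for k in reference}
print(net_g, per_cu, inadequate)
```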
Ten food groups were utilized in the research to explore the food pattern consumed in all families based on their level of food insecurity. All reported meals and drinks were assigned to one of the ten food categories (Table 1).
Household Food Security.
The Household Food Insecurity Access Scale (HFIAS), a pretested questionnaire, was utilized in the present study to identify levels of food security among Filipino households. A licensed nutritionist-dietitian conducted the face-to-face interviews and administered the questionnaire to the study participants. The questions were based on the household's food intake during the previous month, followed by inquiries on how frequently the household encountered the circumstances. The HFIAS categorizes food insecurity into four levels: food secure, mild, moderate, and severe. Table 2 categorizes the types of food insecurity faced by households based on their frequency level. A food secure household does not encounter any of the circumstances, or only rarely has to worry about food. A household becomes mildly food insecure if it is occasionally or frequently concerned about food and/or is unable to consume preferred meals and/or rarely has to eat less diverse foods and/or foods it dislikes. A moderately food insecure household sacrifices food quality by eating a less varied diet and/or undesirable foods on a regular or irregular basis and begins to reduce the amount of food by reducing meal portions or the number of meals, but does not experience the three most severe conditions. A severely food insecure household often decreases the amount of food consumed and exhibits the three most severe symptoms (running out of food, going to sleep hungry, and not eating for the whole day). Any household experiencing any of the three severe situations is already classified as severely food insecure [10].
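As a simplified, illustrative sketch of this decision logic (the actual HFIAS instrument uses nine frequency-coded items and a published scoring algorithm, so the rule below is only an approximation of the categories described above):

```python
# Simplified HFIAS-style classification. Each argument is the worst frequency
# reported for that domain: "never", "rarely", "sometimes" or "often".
# severe_conditions covers the three most severe items (running out of food,
# going to sleep hungry, not eating for a whole day).
def hfias_category(worried, poor_quality, reduced_quantity, severe_conditions):
    if severe_conditions != "never":
        return "severe"    # any severe condition classifies the household
    if reduced_quantity in ("sometimes", "often") or poor_quality in ("sometimes", "often"):
        return "moderate"  # quality sacrificed and/or quantity reduced
    if worried in ("sometimes", "often") or poor_quality == "rarely":
        return "mild"
    return "food secure"   # no conditions, or only rare worry about food

print(hfias_category("often", "rarely", "never", "never"))  # -> mild
```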
Food Consumption Score.
The food consumption score (FCS) is a frequency-weighted diet variety score based on a household's frequency of consuming various categories of food in the last seven days before survey administration.
The FCS was estimated based on the variety of family intake of nine food groups: major staples, vegetables, fruits, meat and fish, oils, sauces, sugar, milk, and pulses. These were weighted by the nutrient quality each adds to the diet, multiplied by the frequency (number of days) of intake (Table 3) [11].
Households with a score of less than 28 are deemed to have inadequate food consumption, scores between 28 and 42 were considered borderline food consumption, and scores over 42 were judged to indicate adequate food consumption (Table 4).
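A short sketch of the FCS computation and the cut-offs above follows; the group weights shown are the standard WFP values and are assumed here rather than quoted from Table 3:

```python
# Food consumption score: days of consumption (0-7) per food group in the last
# seven days, weighted and summed, then classified with the cut-offs above.
# The weights are the standard WFP values (assumed, not quoted from the paper).
WEIGHTS = {"staples": 2.0, "pulses": 3.0, "vegetables": 1.0, "fruits": 1.0,
           "meat_fish": 4.0, "milk": 4.0, "sugar": 0.5, "oil": 0.5}

def food_consumption_score(days_per_group: dict) -> float:
    return sum(WEIGHTS[g] * min(d, 7) for g, d in days_per_group.items())

def classify(score: float) -> str:
    if score < 28:
        return "poor"
    if score <= 42:
        return "borderline"
    return "acceptable"

example = {"staples": 7, "pulses": 2, "vegetables": 5, "fruits": 1,
           "meat_fish": 3, "milk": 1, "sugar": 6, "oil": 7}
score = food_consumption_score(example)
print(score, classify(score))  # -> 48.5 acceptable
```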
Socioeconomic and Demographic Data.
Data on family economic status (wealth status), household size, place of household residence, sex of the household head, educational level and occupation of the family head, and other household profiles were collected in this survey. The wealth index of Filipino households was determined through principal component analysis (PCA) based on variables such as household characteristics, household assets, infrastructure factors, and utility access. Scores were assigned to each household asset and then used to categorize wealth quintiles as poorest, poor, middle, rich, and richest. The in-depth methods of measurement and categorization are presented elsewhere [12].
Statistical Analysis.
Stata 15 was used for all statistical analyses performed in this study (Stata Statistical Software, release 15, Stata Corporation 2017). Frequencies and percentages were used to present the characteristics of Filipino households. The mean, standard deviation, median, 25th percentile, and 95th percentile of household food and nutrient intakes were estimated to show the distribution of consumption by food security level. For dichotomous, ordinal, and nominal categorical data, as well as measurement data, chi-square tests were employed to examine the relationships between household variables and food security levels. Differences in household food and nutrient intakes were compared across food security levels using one-way analysis of variance (ANOVA). Food and nutrient intakes were transformed through the natural logarithm function ln(x). Food pattern was analyzed by comparing the distribution of intakes of the 10 food groups by food security level. Diet quality was assessed based on the FCS scoring and the percentage contribution of each food group to total energy intake. Percentage contribution was calculated by summing the total energy for each food group, dividing by the overall sum of energy from all food, and multiplying by 100.
To estimate the relationship between dietary intake and food security level while adjusting for confounders, linear regression analysis was used in the association analysis. Unstandardized beta coefficients and 95% confidence intervals are presented. Logistic regression analysis was applied to determine the odds of food and nutrient inadequacies in relation to food security levels. The odds ratios (OR) and 95% confidence intervals are also presented in this study. Moreover, all models were analyzed both with and without adjustment. Confounder variables were household size, place of residence, sex, education and occupation of the household head, wealth quintile, electricity status, and type of toilet facility. All analyses set the significance level α at 0.05 and accounted for the sampling weights to reflect nationally representative results.
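A hedged sketch of the adjusted logistic-regression step, using statsmodels on synthetic data standing in for the survey (variable names are hypothetical placeholders, and the confounder list is abbreviated; the actual analysis also applies survey sampling weights):

```python
# Odds of nutrient inadequacy by food security level, adjusted for confounders.
# Synthetic data for illustration only; the real analysis uses the survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "inadequate_energy": rng.integers(0, 2, n),
    "food_security": rng.choice(["secure", "mild", "moderate", "severe"], n),
    "household_size": rng.integers(1, 10, n),
    "wealth_quintile": rng.integers(1, 6, n),
})

model = smf.logit(
    "inadequate_energy ~ C(food_security, Treatment('secure'))"
    " + household_size + C(wealth_quintile)", data=df).fit(disp=0)

print(np.exp(model.params))      # odds ratios relative to food secure
print(np.exp(model.conf_int()))  # 95% confidence intervals
```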
Results
The study included a total of 9,668 Filipino households, with nearly equal representation from rural and urban areas. The majority of Filipino households (67%) were found to be food insecure, with 12 percent, 32 percent, and 22 percent being mildly, moderately, and severely food insecure, respectively. In terms of family size, more than half (63%) have five or fewer family members, while 36% have more than five.
Most household heads were male (79%), and the majority had reached the elementary level (39%) or high school level (35%) of education. About 38% of family heads have low-income occupations, while 6% have no occupation. The proportion of households was similarly distributed across the wealth quintiles. Only 9% of the households have no electricity, and 14% either have no toilet or one that is not water-sealed.
All socioeconomic and demographic characteristics of the households included in this analysis were found to be significantly associated with food security level. Specifically, seven out of ten (70%) families with ≤5 members were food secure. More than half (57-61%) of the homes in rural areas were moderately or severely food insecure. Forty percent of food secure families were in the richest quintile. Thirty-five percent of severely food insecure households were in the poorest quintile and 25% in the poor quintile. Moreover, the study found that half (51%) of severely food insecure households have a family head with an elementary education level. Half of food secure households have a family head with a high-income occupation, while almost half (47%) of severely food insecure households have one with a low-income occupation. Households with no electricity (18%) and no toilet or non-water-sealed facilities (27%) have a higher rate of severe food insecurity (Table 5).
Food and Nutrient Intake according to Food Security Status of the Households.
The most commonly consumed foods were cereals, rice, vegetables, and meat, with average consumptions of 1508 g, 1303 g, 528 g, and 708 g, respectively. On the other hand, the least consumed foods were dried beans, nuts and peas (34 g), fats and oils (58 g), starchy roots and tubers (59 g), and sweetened beverages (61 g) (Table 6).
Among food secure households, cereals, rice, and meat were consumed with averages of 1356 g, 1193 g, and 708 g, followed by vegetables (528 g) and milk (198 g). In mildly food insecure households, the mean household consumption of cereals, rice, and meat was 1504 g, 1302 g, and 751 g, accompanied by vegetables (528 g) and milk (184 g). The foods most commonly consumed by moderately and severely food insecure households were cereals, rice, and meat, at 1590 g, 1357 g, and 637 g among the moderately food insecure and 1611 g, 1389 g, and 576 g among the severely food insecure. Milk (moderate: 132 g, severe: 121 g), starchy roots and tubers (moderate: 58 g, severe: 74 g), sugary sweetened drinks (moderate: 58 g, severe: 52 g), fats and oils (moderate: 56 g, severe: 51 g), and dried beans, nuts, and seeds (moderate: 56 g, severe: 30 g) were the least eaten foods among moderately and severely food insecure households (Table 6).
Tests showed that the moderately and severely food insecure groups consume considerably less meat, milk, and fats and oils than food secure families. Severely food insecure households were also found to have lower mean intakes of fruits and sugary sweetened beverages than food secure households. Moreover, moderately and severely food insecure households consume more cereals and cereal products, rice, and vegetables than food secure homes (Table 6). The ANOVA test revealed a significant mean difference in consumption across all food groups by food security index (p value < 0.05). Multiple comparison tests showed that both the moderately and severely food insecure groups have significantly lower mean consumption of meat, milk, and fats and oils compared to the food secure group. Severely food insecure households have lower average intakes of fruits and sugary sweetened beverages relative to food secure households. On the other hand, both moderately and severely food insecure households have higher consumption of cereals and cereal products, rice, and vegetables compared to food secure households (Table 6).
Overall, the mean household intakes of total energy, carbohydrates, protein, and fat were 7607 kcal, 1340 g, 228 g, and 146 g, respectively. Mean intake was 38 mg for iron, 1650 mg for calcium, 1649 µg for vitamin A, 3 mg for thiamin, 3 mg for riboflavin, 76 mg for niacin, and 182 mg for vitamin C (Table 7).
Moderately food insecure households had a higher mean calorie intake, as reflected in higher total carbohydrate intake, and lower vitamin A, riboflavin, niacin, and total fat intakes as compared with food secure households. In addition, severely food insecure families had considerably lower mean calcium and thiamin consumption than food secure homes (Table 7).
The ANOVA test showed significant mean nutrient intake differences by level of food security (p value <0.05), except for iron and vitamin C intakes. Moderately food insecure households have a higher mean calorie intake relative to food secure families. Moderately and severely food insecure households were found to have higher mean consumption of total carbohydrates, but significantly lower mean consumption of vitamin A, riboflavin, niacin, and total fat compared to food secure households. Severely food insecure households have significantly lower mean intakes of calcium and thiamin compared to households that are food secure (Table 7).
Food Security by Household Food Consumption Classification (FCS).
Almost half (49%) of severely food insecure households had insufficient food consumption, while 34% and 20% had borderline and acceptable consumption, respectively. On the other hand, the prevalence of food security was 36% among households with acceptable food consumption, 21% for borderline, and 16% for households with poor food consumption. The distribution of food consumption scores showed no large differences, with only a slight trend for moderately food insecure (26-35%) and mildly food insecure families (9-13%). Supporting this, a chi-square test confirmed that levels of food security were significantly associated with the FCS categories at the 5% level of significance (Table 8).
Food Security by Sources of Foods and Nutrients.
Overall, about 68% of total household energy consumption came from rice, 14% from meat, and 7% from fats and oils, with the remainder from contributing food groups such as sweetened beverages, vegetables, milk, fruits, eggs, dried beans, nuts and seeds, and starchy roots and tubers. Rice remained the top contributor to the calorie intake of Filipino households across food security levels. However, about 74% of the energy intake of severely food insecure families came from rice, decreasing to 71% for the moderately food insecure, 67% for the mildly food insecure, and 63% for the food secure. Moreover, the contribution of meat to household calorie consumption was 18% for the food secure, with a decreasing trend across food insecurity levels: mild, 14%; moderate, 12%; and severe, 10%. The contribution of fat across food security levels appears similar. The remaining groups, such as fruits, vegetables, milk, eggs, and dried beans and nuts, appear to have very low contributions to the caloric intake of Filipino households (Table 9).
Regression Analysis.
After adjustment for potential confounders such as household size, place of residence, sex, education and occupation of the family head, electricity status, type of toilet facility, and socioeconomic status, food security level was significantly associated with the food consumption score (FCS) and nutrient intakes of Filipino households, except for total carbohydrate and vitamin A intakes. Riboflavin intake significantly declined by −0.14 (−0.25, −0.02) in severely food insecure households. Niacin intake significantly decreased by −3.12 (−5.2, −1) and −5.9 (−8.3, −3.5) for moderately and severely food insecure households, respectively. Vitamin C intake diminished by −18.2 (−30.5, −6) for the severely food insecure relative to food secure households. Food security level appears not to be related to changes in the consumption of total carbohydrates and vitamin A (Table 10).
Model 1 was adjusted to account for household size, place of residence, and sex of the household head. Results showed that the likelihood of a poor/borderline FCS increased with the severity of food insecurity. The probability of thiamin inadequacy was 1.64 times higher among the severely food insecure (95% CI: 1.45-1.87), whereas for the moderately and mildly food insecure, the odds increased by 1.56 (95% CI: 1.39-1.75) and 1.23 (95% CI: 1.1-1.42), respectively, compared to food secure homes. The severely food insecure group was 2.15 times more likely to be inadequate in riboflavin relative to food secure families.
Food Group Consumption according to Food Security Status.
The present study revealed that moderately and severely food insecure households had higher mean consumption of cereals and cereal products, rice, vegetables, and starchy roots and tubers, while having lower consumption of fruits, meat, fish, and poultry, as well as milk and milk products, in comparison with food secure households. These results are in line with previous literature, which found that food insecure families consumed more carbohydrate-rich foods [13,14] and less animal source foods, protein-rich food, dairy products, and fruits [15,16]. According to previous research, this could be attributed to the fact that, at lower income levels, households tend to consume more cereals as a cheap source of calories [17]. This is supported by a major hypothesis of previous research on food insecurity and diet, which suggests that food insecurity may result in a "substitution effect", where higher quality and/or less calorie-dense foods (including produce and lean sources of protein) are replaced with more energy-dense foods (often high in simple carbohydrates) that are less expensive on a per-calorie basis [18]. Thus, given the lower cost of calorie-dense foods such as rice and starchy roots and tubers, food insecure households are more likely to be drawn to these, while consuming smaller amounts of nutrient-dense foods such as protein-rich meat, fish, poultry, and milk, as well as fruits that are rich in micronutrients [19,20]. Yet, on the contrary, the present study found higher consumption of vegetables, which are nutrient-dense, among food insecure households than among food secure ones. Moreover, this study found lower fat intake among food insecure households, which differs from the results of previous studies in Western populations, where food insecure households have been observed to be more likely to consume high-fat foods due to a lack of resources [21-26]; this could suggest an impact of geographical location on food consumption. In terms of the high consumption of rice, a possible explanation is that rice is a staple food for Filipinos.
Thus, Filipinos living in moderately and severely food insecure households obtain their energy intake mainly from carbohydrates, primarily rice, rather than from protein and fat sources. It is a dogma in the Philippines, especially among households of the lowest economic status, that a rice supply connotes food security.
In terms of food group consumption and expenditures, the relationship between food insecurity and overall daily per capita (DPC) intake was investigated in a previous report involving Bolivia, Burkina Faso, and the Philippines. That study found that, for food secure households, overall DPC food expenditure, as well as expenditure on animal goods, fruits, and fats and oils, was slightly greater (p < 0.05) in comparison with both moderately and severely food insecure households [27]. Poverty, the common cause of food insecurity, has been stated in prior research to make consumers even more sensitive to changes in income and food prices, because they have no safety nets to absorb income or price shocks when they purchase [28]. This is in line with the results of the 2018 eNNS in the Philippines, wherein the poorest households spent 42% of their total food purchases on energy-giving foods and 38% on body-building food, while the richest households spent more than half of their food purchases on body-building food, which is more expensive, and only 29% on energy-giving food [16]. The association of food expenses with household food insecurity, however, was not analyzed in the present study.
Food Consumption Score according to Food Security Status.
Delving into one of the indicators of food security assessed in this study, food security was found to be significantly associated with the food consumption score (FCS) (Figure 1). Moreover, a significant association was found between severely food insecure households and reduced food consumption scores. The present study revealed that almost half (49%) of severely food insecure households have poor food consumption, and an increased likelihood of poor food consumption scores (FCS < 42) becomes more prevalent as the degree of household food insecurity worsens. Furthermore, poor food consumption was more pronounced in households that were moderately or severely food insecure.
A defining characteristic of food insecurity is limited or uncertain access to sufficient food [29]. Moreover, according to a previous study, poor food consumption could be linked to the attributes of food insecure households, characterized as having lower monthly per capita income, less desirable jobs, poor housing conditions, and lower levels of education, all of which can affect dietary intake [30]. These factors have been stated by several previous studies to contribute to poorer food accessibility and availability [31,32]. This could also be explained by the coping mechanisms of food insecure households in poverty, which reduce the quantity of food consumed to sustain their energy needs [33] or resort to food shopping practices driven by efforts to reduce food expenses [34], which could lead to poor food consumption.
Energy and Nutrient Intakes according to Food Security Status.
Regarding nutrient intakes, household food insecurity, specifically severe food insecurity, was found to be significantly associated with reduced consumption of total energy, total protein, total fat, calcium, iron, thiamin, riboflavin, niacin, and vitamin C, with the exceptions of total carbohydrates and vitamin A. Moreover, higher severity of household food insecurity significantly increases the prevalence of inadequate total energy, total protein, calcium, vitamin A, thiamin, riboflavin, niacin, and vitamin C intakes. Limited food accessibility, if prolonged, may explain the decline in nutrient intake observed among food insecure households and how it negatively affects nutritional status [35]. Indeed, the present study confirms findings of prior research suggesting that household food insecurity is a marker of nutritional vulnerability, increasing susceptibility to nutrient inadequacies [14,36,37]. A previous study in Canada has shown compromised nutrient intake among food insecure households struggling with food sources [38]. In relation to the food groups consumed, the reduced intakes of total protein, iron, and B vitamins may be ascribed to the lower consumption of meat, poultry, and milk products among food insecure households, since these are major food sources of the aforementioned nutrients. The inadequate intake of fruits is also reflected in the reduced vitamin C intake among food insecure households. In a previous study, a higher prevalence of deficiencies in nutrients such as protein, vitamin A, thiamin, riboflavin, vitamin B-6, folate, vitamin B-12, magnesium, phosphorus, and zinc was found among individuals living in food insecure households [39]. Thus, the present study suggests that food insecure households consume diets of poor nutrient quality, which predisposes them to nutrient deficiencies. Inadequate nutrient intakes can adversely affect adults' [40,41] and children's [37,42,43] health and well-being. This underscores the need for interventions targeting household food insecurity, particularly focusing on energy and nutrient intakes.
Contribution of Food Sources to Energy Intake according to Food Security Status.
Pertaining to the percentage contribution of each food group to the total energy intake of Filipino households, rice remained the major energy source regardless of household food security level. Despite the fact that rice is the cheapest and most effective way to maintain a sustainable energy intake, it is considered nutritionally undesirable [44]. Moreover, rice-based diets are related to vitamin and iron deficiencies, which in turn affect long-term food security [45]. The contribution of meat to household calorie consumption was also found to be higher among food secure households (18%), with an alarmingly decreasing trend across food insecurity levels: mild (14%), moderate (12%), and severe (10%). This is consistent with a previous study, which discovered that food secure households consume more meat than households that are food insecure [46]. Meat may also be less consumed among food insecure households because it is more expensive than other food items [47]. The remaining food groups, such as fruits, vegetables, milk, eggs, and dried beans and nuts, appear to have very low contributions to the caloric consumption of both food secure and food insecure households (Table 8).
These findings connote that the present study may affirm previous literature stating that, when food is available, low-income households suffering from food insecurity consume monotonous meals that are low in quality, cereal-based, and bereft of vegetables, fruit, and animal source foods, raising the risk of micronutrient deficiencies [48-50]. A monotonous diet, reflected here in the contributions of the food groups to energy intake, has been found to be closely associated with food insecurity [49], resulting in malnutrition. Fruits and vegetables, which were also among the least consumed foods contributing to household energy intake, are nutritionally beneficial since they are rich in vitamins and minerals such as folate, vitamin A, vitamin C, and carotenoids [51], as well as dietary fiber and phytochemicals [52].
Conclusions
Household food insecurity was associated with dietary patterns among Filipinos.
This is reflected in the higher consumption of calorie-dense foods among Filipino households experiencing moderate and severe food insecurity, which explains the lower nutrient quality and higher likelihood of nutrient inadequacy or micronutrient deficiencies observed in these households. Since food insecurity and dietary pattern are intertwined economic issues, programs and policies addressing food insecurity in the Philippines may need to take steps to improve the whole supply chain, so that products become more available and accessible at more affordable cost, improving the quality and quantity of consumed food.
Data Availability
The dataset used for this study can be requested via an online application from the Department of Science and Technology, Food and Nutrition Research Institute's official website (http://enutrition.fnri.dost.gov.ph/site/puf-preview.php?xx=201596).
Conflicts of Interest
The authors declare that there are no conflicts of interest.
"Agricultural and Food Sciences",
"Environmental Science",
"Economics"
] |
Amelioration of human peritoneal mesothelial cell co-culture-evoked malignant potential of ovarian cancer cells by acacetin involves LPA release-activated RAGE-PI3K/AKT signaling
Background Ovarian cancer is a devastating gynecological malignancy and frequently presents as an advanced carcinoma with disseminated peritoneal metastasis. Acacetin exerts anti-cancer effects in several carcinomas. Here, we sought to investigate acacetin function in ovarian cancer malignancy triggered by peritoneal mesothelial cells. Methods Peritoneal mesothelial cells were treated with acacetin, and the conditioned medium was then collected to treat ovarian cancer cells. Cell proliferation was analyzed by MTT assay. Transwell analysis was conducted to evaluate cell invasion. Protein expression was determined by western blotting. ELISA and qRT-PCR were applied to analyze inflammatory cytokine levels. The underlying mechanism was also explored. Results Acacetin suppressed cell proliferation and invasion, but enhanced cell apoptosis. Furthermore, mesothelial cell-evoked malignant characteristics were inhibited when mesothelial cells were pre-treated with acacetin, via restrained cell proliferation and invasion, concomitant with decreases in proliferation-related PCNA, MMP-2 and MMP-9 levels. Simultaneously, acacetin reduced mesothelial cell-induced transcription and production of the pro-inflammatory cytokines IL-6 and IL-8 in ovarian cancer cells. Mechanistically, acacetin decreased lysophosphatidic acid (LPA) release from mesothelial cells, and the subsequent activation of receptor for advanced glycation end-products (RAGE)-PI3K/AKT signaling in ovarian cancer cells. Notably, exogenous LPA restored the above pathway and offset the efficacy of acacetin against mesothelial cell-evoked malignancy in ovarian cancer cells, including cell proliferation, invasion and inflammatory cytokine production. Conclusions Acacetin may not only directly inhibit ovarian cancer cell malignancy, but also antagonize mesothelial cell-evoked malignancy by blocking LPA release-activated RAGE-PI3K/AKT signaling. Thus, these findings provide supporting evidence for a promising therapeutic agent against ovarian cancer.
Background
Ovarian cancer is the most lethal gynecological malignancy of the female reproductive tract and the fifth deadliest cancer worldwide [1]. Notably, epidemiologic research reports approximately 22,240 newly diagnosed cases and 14,070 ovarian cancer deaths in the United States [2]. There is a steadily increasing incidence of ovarian cancer in the UK today, especially in women aged 65 and over [3]. Currently, the high incidence and mortality of ovarian cancer constitute a major obstacle for global health [1,2]. Despite advances in conventional therapy for ovarian cancer, comprising surgery, radiotherapy and chemotherapy, more than 60% of patients are diagnosed with advanced disease [1]. Approximately 50-85% of patients with advanced ovarian cancer have a poor prognosis and experience recurrence within 5 years due to the highly metastatic character of the disease, leading to a median survival time of approximately 2 years [1,2]. The high mortality of ovarian cancer mainly results from occult progression in the peritoneal cavity, driven by preferential metastasis to the peritoneum, a condition widely known as peritoneal ovarian carcinomatosis [4,5]. Metastasis to the peritoneum is a critical step in the progression of ovarian cancer, as it provides a nutrient-rich tumor microenvironment (TME) consisting of various cell types, such as fibroblasts and mesothelial cells [6,7]. Initial research regarding the TME usually focused on fibroblasts [7]. Recently, increasing evidence has confirmed a critical contribution of peritoneal mesothelial cells to the development of ovarian cancer [6,8]. Mesothelial cells are the major cell population of the peritoneum, covering its surface [9,10]. Emerging evidence has suggested that mesothelial cells can facilitate the progression of ovarian cancer by promoting multiple tumorigenic processes, including cell proliferation, invasion, migration and adhesion [8,10,11]. Therefore, there is an urgent need to elucidate the interplay and underlying mechanism between peritoneal mesothelial cells and ovarian cancer cells for cancer prevention.
Increasing attention has focused on the potential application of natural products as promising cancer therapeutic agents [12,13]. Acacetin (5,7-dihydroxy-4ʹ-methoxyflavone) (Fig. 1A) is a common flavonoid compound widely found in plants, vegetables, seeds and flowers. Notably, previous findings have demonstrated that acacetin possesses anti-ischemia/reperfusion injury, anti-inflammatory and antioxidative activity [14,15]. Recently, increasing evidence has indicated that acacetin exhibits anti-cancer efficacy in several cancers, including skin cancer [16], breast cancer [17] and prostate cancer [13]. In particular, administration of acacetin restrains tumor angiogenesis and growth in ovarian cancer [18]. Nevertheless, little research has focused on its roles in the tumor microenvironment.
In the present study, we sought to investigate the efficacy of acacetin in peritoneal mesothelial cell-facilitated malignant potential in ovarian cancer cells. Additionally, the potential molecular mechanism was also elucidated.
Cell culture
The normal human ovarian surface epithelial cell line IOSE80, the human mesothelial cell line Met-5A and the ovarian cancer cell line SKOV3 were purchased from the American Type Culture Collection (ATCC; Manassas, VA, USA). For culture, the SKOV3 cells were maintained in RPMI-1640 medium containing 10% fetal bovine serum (FBS) (Thermo Fisher Scientific, Waltham, MA, USA) and 50 U/ml penicillin/streptomycin. The Met-5A cells were grown in Dulbecco's modified Eagle medium (DMEM)/F12 medium supplemented with 10% FBS, hydrocortisone (0.1 µg/ml), 50 U/ml penicillin/streptomycin, insulin (2.5 µg/ml) and 5 ng/ml EGF. All cells were housed in a humidified 5% CO2 atmosphere at 37 °C.
Assay of cell apoptosis by flow cytometer
After being seeded in 6-well plates, ovarian cancer cells were incubated with 5 µM, 10 µM and 20 µM of acacetin for 24 h. Then, cell apoptosis was assessed with an annexin V-FITC apoptosis analysis kit (Beyotime, Nantong, China). Briefly, the collected cells were centrifuged and re-suspended in 195 µl of binding buffer. Then, cells were incubated with 5 µl of annexin V-FITC and 10 µl of PI at room temperature, protected from light. Approximately 20 min later, a flow cytometer (BD Biosciences, CA) was used to determine cell apoptosis.
Evaluation of cell invasion
To analyze cell invasion, Matrigel-coated transwell chambers with 8 μm pore-size polycarbonate filters (BD Biosciences, Bedford, MA, USA) were applied. In brief, cells treated with acacetin, LPA or conditioned medium were collected and re-suspended in serum-free medium. After that, cells (1 × 10^5 cells) were added to the upper chamber of transwell inserts pre-coated with Matrigel (1.5 mg/ml), and invasion was allowed to occur. The lower chamber was supplemented with medium containing 10% FBS. Then, non-invading cells were removed from the upper chamber with a cotton swab. The invading cells were fixed, stained with 0.1% crystal violet, and counted using a light microscope (× 200) in five fields per filter. All experiments were performed independently in triplicate.
Immunoblotting
Cells treated under the indicated conditions were collected and lysed with RIPA lysis buffer. After centrifugation at 4 °C for 10 min, the extracted protein concentration was quantified using a BCA kit (Beyotime). Subsequently, 30 µg of protein was resolved by SDS-PAGE and transferred to a PVDF membrane (Millipore, Billerica, MA, USA). To prevent non-specific binding, the membrane was incubated with 5% non-fat milk. Then, primary antibodies against PCNA, MMP-2, MMP-9, receptor for advanced glycation end-products (RAGE), p-AKT, AKT, p-PI3K and PI3K (all from Abcam, Cambridge, MA, USA) were added for further incubation at 4 °C overnight. After rinsing with TBST three times, the membrane was treated with goat anti-rabbit secondary antibodies conjugated to horseradish peroxidase at room temperature for 2 h. The binding signal was visualized by exposure to chemiluminescence reagent (ECL, Beyotime). For normalization, β-actin was used as an internal standard. The intensities of bands were quantified using ImageJ software.
RNA extraction and qRT-PCR analysis
After collection from the various groups, total RNA from cells was prepared using TRIzol reagent (Sigma). The extracted total RNA was primed with oligo(dT) and reverse transcribed to synthesize first-strand cDNA using a commercial SuperScript II First Strand Synthesis System Kit (Invitrogen, CA, USA). Afterwards, transcriptional levels of IL-6 and IL-8 were analyzed by real-time PCR on an Applied Biosystems 7300 Real-Time PCR System (Applied Biosystems, Foster City, CA, USA). All protocols were carried out according to the instructions provided with a SYBR Premix Ex Taq II Kit (TaKaRa, Dalian, China). The specific primers for these genes were as follows: IL-6 (sense, 5ʹ-GAC CAC ACT TGG AGG TTT AAGG-3ʹ; anti-sense, 5ʹ-CCA CTG ATC TGG TGG TGT AAAG-3ʹ), IL-8 (sense, 5ʹ-TTC ACT GCT CTG TCG TAC TTTC-3ʹ; anti-sense, 5ʹ-CAC ACC AAG GAA GGG TTC TTAT-3ʹ), and β-actin (sense, 5ʹ-TCC CTG GAG AAG AGC TAT GA-3ʹ; anti-sense, 5ʹ-CAG GAA GGA AGG CTG GAA A-3ʹ). β-actin served as an endogenous control, and relative target gene expression was calculated using the 2^−ΔΔCT method.
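As a concrete illustration of the 2^−ΔΔCT calculation referenced above, the following minimal Python sketch computes relative expression from mean Ct values. The Ct numbers and function names are illustrative assumptions, not values from the study.

```python
def ddct_relative_expression(ct_target_treated, ct_ref_treated,
                             ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    Each argument is the mean Ct of the target gene (e.g. IL-6) or the
    reference gene (beta-actin) in treated or control samples.
    """
    dct_treated = ct_target_treated - ct_ref_treated  # normalize to beta-actin
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control                  # compare with the control group
    return 2.0 ** (-ddct)

# Hypothetical Ct values for IL-6 in CM-treated vs control cells
fold_change = ddct_relative_expression(24.1, 17.3, 26.8, 17.4)
print(f"IL-6 relative expression: {fold_change:.2f}-fold")
```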
ELISA assay
The supernatants from mesothelial cells and ovarian cancer cells were prepared by sonication followed by centrifugation at 4 °C. The LPA content in supernatants was measured using a human lysophosphatidic acid (LPA) ELISA kit (Cusabio, Wuhan, China). Commercially available IL-6 and IL-8 ELISA kits (Invitrogen) were used to determine the levels of IL-6 and IL-8 in supernatants from ovarian cancer cells. All procedures were conducted according to the manufacturers' instructions.
Statistical analysis
Results from at least three independent experiments are shown as mean ± SD. Statistical comparisons were performed in SPSS 19.0 using Student's t-test for two groups and ANOVA with the post-hoc Student-Newman-Keuls test for three or more groups. The criterion for statistical significance was defined as P < 0.05.
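To make the comparison workflow concrete, the sketch below reproduces it in Python with SciPy and statsmodels on simulated placeholder data. Note that it substitutes Tukey's HSD for the Student-Newman-Keuls post-hoc test, since SNK is not available in these libraries.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Simulated triplicate viability measurements (placeholders, not study data)
control = rng.normal(100, 5, 3)
acacetin_5 = rng.normal(80, 5, 3)
acacetin_20 = rng.normal(55, 5, 3)

# Two groups: Student's t-test
t, p = stats.ttest_ind(control, acacetin_5)
print(f"t-test: t={t:.2f}, p={p:.4f}")

# Three or more groups: one-way ANOVA followed by a post-hoc test
f, p = stats.f_oneway(control, acacetin_5, acacetin_20)
print(f"ANOVA: F={f:.2f}, p={p:.4f}")

values = np.concatenate([control, acacetin_5, acacetin_20])
groups = ["ctrl"] * 3 + ["aca5"] * 3 + ["aca20"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey HSD in place of SNK
```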
Acacetin restrains ovarian cancer cell growth and invasion
To elucidate the function of acacetin in the microenvironment of ovarian cancer, we first evaluated the cytotoxicity of acacetin on the normal ovarian surface epithelial cell line IOSE80 and found that acacetin had little cytotoxicity to IOSE80 cells even at increasing doses (Fig. 1B). As presented in Fig. 1C, acacetin dose-dependently inhibited ovarian cancer cell viability, with an IC50 of 21.63 µM at 24 h and 13.65 µM at 48 h. Furthermore, exposure to acacetin dose-dependently promoted ovarian cancer cell apoptosis relative to the control group (Fig. 1D). Transwell assays corroborated that treatment with 5-20 µM acacetin reduced the number of invading SKOV3 cells (Fig. 1E). Thus, these data indicate that acacetin may suppress the malignant progression of ovarian cancer.
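IC50 values such as those reported above are commonly derived by fitting a dose-response model to viability measurements. The sketch below shows one standard way to do this with a four-parameter logistic (Hill) curve; the viability numbers are invented for illustration, and this is not the study's own analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical viability (% of control) at increasing acacetin doses (uM)
dose = np.array([1.25, 2.5, 5.0, 10.0, 20.0, 40.0, 80.0])
viability = np.array([98.0, 95.0, 88.0, 70.0, 52.0, 30.0, 15.0])

popt, _ = curve_fit(four_pl, dose, viability, p0=[10, 100, 20, 1.0])
print(f"Estimated IC50: {popt[2]:.2f} uM")
```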
Acacetin incubation antagonizes mesothelial cell conditioned medium-induced pro-growth and invasion potential in ovarian cancer cells
Convincing evidence substantiates the carcinogenic role of mesothelial cells in the progression of ovarian cancer [8,20]. We therefore next investigated the effect of acacetin on mesothelial cell-evoked malignant potential in ovarian cancer cells. As shown in Fig. 2A, conditioned medium (CM) from mesothelial cells increased cell viability to approximately 223.6% of the control group. Intriguingly, this up-regulation was abrogated when cells were incubated with CM from acacetin-treated mesothelial cells. Notably, there was no obvious difference between the CM/10 µM and CM/20 µM acacetin groups. Concomitantly, CM exposure elevated the protein levels of proliferation-related PCNA in contrast to the control group, which was overturned after incubation with CM from 10 µM acacetin-treated mesothelial cells (Fig. 2B). Moreover, acacetin stimulation attenuated CM-induced ovarian cancer cell invasion (Fig. 2C, D), concomitant with a decrease in the protein levels of MMP-2 and MMP-9 (Fig. 2E, F).
Treatment with acacetin offsets the transcription and release of inflammatory cytokines in response to CM from peritoneal mesothelial cells
As presented in Fig. 3A, ovarian cancer cells cultured in CM from peritoneal mesothelial cells exhibited increased mRNA levels of the pro-inflammatory cytokine IL-6, which was offset after acacetin treatment. Simultaneously, acacetin-treated CM mitigated the production of IL-6 in ovarian cancer cells under CM conditions (Fig. 3B). Additionally, incubation with CM also enhanced the transcription (Fig. 3C) and release (Fig. 3D) of IL-8 in ovarian cancer cells. However, these increases were both weakened when cells were incubated with CM from acacetin-treated mesothelial cells.
Acacetin suppresses LPA release from human peritoneal mesothelial cells
Accumulating evidence supports the critical roles of LPA in peritoneal mesothelial cell-mediated malignant progression of ovarian cancer cells [10,20]. Therefore, we explored the effects of acacetin on LPA production from peritoneal mesothelial cells. Acacetin treatment markedly inhibited LPA production in conditioned medium collected from peritoneal mesothelial cells (Fig. 4A).
Acacetin inhibits CM-induced activation of RAGE-PI3K/AKT signaling in ovarian cancer cells
A previous study confirmed the crucial function of LPA in tumor growth in ovarian cancer via RAGE signaling. Thus, we further investigated the involvement of RAGE signaling during these processes. Importantly, CM incubation enhanced the protein expression of RAGE (Fig. 4B, C), as well as the downstream p-PI3K (Fig. 4B, D) and p-AKT (Fig. 4B, E), in ovarian cancer cells, whereas no significant differences in total PI3K and AKT protein levels were observed when cancer cells were incubated with CM (Fig. 4F). This activation was attenuated when cells were incubated with CM from acacetin-treated mesothelial cells. These findings suggest the inhibitory effects of acacetin on CM-activated RAGE-PI3K/AKT signaling.
Exogenous supplementation of LPA overturns the effects of acacetin on mesothelial cell-evoked malignant potential in ovarian cancer cells
To further decipher the involvement of LPA in acacetin function against mesothelial cell-evoked malignant potential in ovarian cancer cells, exogenous LPA was applied. Intriguingly, acacetin pretreatment restrained the expression of RAGE, p-PI3K and p-AKT protein in CM-stimulated ovarian cancer cells, which was reversed after supplementation with exogenous LPA at 5 µM and 10 µM (Fig. 5A, B). Moreover, the inhibitory efficacy of acacetin in CM-induced cell proliferation (Fig. 5C) and invasion (Fig. 5D) was overturned when cancer cells were incubated with CM containing added LPA. Simultaneously, LPA supplementation in CM abrogated acacetin-mediated suppression in CM-evoked elevation of mRNA levels (Fig. 5E) and release (Fig. 5F) of pro-inflammatory cytokines IL-6 and IL-8.
Discussion
Currently, increasing insights have highlighted the promising therapeutic potential of natural products, such as flavonoids, in cancer treatment [12]. Acacetin is a common plant-derived flavonoid that exerts multiple beneficial medicinal effects, including anti-oxidant, anti-neuronal injury and anti-inflammatory activities [14,15]. Intriguingly, accumulating evidence supports its anti-cancer effects; for instance, acacetin treatment suppressed breast cancer cell growth [17]. In the current study, our findings revealed that acacetin suppressed ovarian cancer cell proliferation and invasion but enhanced cell apoptosis. A previous study likewise confirmed the inhibitory effects of acacetin on angiogenesis and tumor growth in ovarian cancer [18]. Therefore, these data indicate that acacetin may act as a potential therapeutic agent for ovarian cancer prevention.

Ovarian cancer frequently presents as an advanced carcinoma with disseminated intra-abdominal metastasis, the major factor underlying patients' poor prognosis [4,6]. Ovarian cancer predominantly undergoes transcoelomic metastasis, in which the primary tumor spreads throughout the peritoneal cavity [21]. Peritoneal metastasis marks a key step in tumor development because the peritoneum provides a nutrient-rich microenvironment for shed ovarian cancer cells [6]. Mesothelial cells are the major constituents of the peritoneum covering the surface of the peritoneal cavity, and rank as the most abundant cell type in the ascites of patients [22]. Recently, emerging evidence has confirmed the pro-tumor characteristics of mesothelial cells in the progression of ovarian cancer by enhancing cancer cell proliferation, invasion and adhesion [5,10,23]. Furthermore, mesothelial cells facilitate intraperitoneal invasiveness of ovarian malignancy and promote early ovarian cancer metastasis [8,23]. Therefore, modulating the function of mesothelial cells in the microenvironment has become a new focus in ovarian cancer treatment. Consistent with previous research [10,23], our findings corroborated the pro-proliferative and pro-invasive potential of mesothelial cells in ovarian cancer. Intriguingly, acacetin antagonized mesothelial cell-evoked proliferation and invasion in ovarian cancer cells, concomitant with decreased MMP-2 and MMP-9 expression. Thus, acacetin may attenuate mesothelial cell-induced malignant potential in ovarian cancer cells.

Accumulating evidence has substantiated the involvement of the inflammatory response in the progression of cancer, including ovarian cancer [24]. Production of pro-inflammatory cytokines in ascites contributes to a more aggressive tumor phenotype [24,25]. Notably, the present study confirmed that mesothelial cells enhanced the transcription and release of the pro-inflammatory cytokines IL-6 and IL-8 in ovarian cancer cells. Recent findings demonstrated that high levels of IL-6 and IL-8 in peritoneal fluid are related to poor prognosis in ovarian cancer patients, supporting them as new prognostic biomarkers in ovarian cancer [26]. Moreover, IL-6 and IL-8 treatment enhances the proliferation and metastatic potential of ovarian cancer cells and facilitates ovarian cancer aggressiveness [27,28]. Of interest, acacetin overturned mesothelial cell-evoked production of these two inflammatory cytokines.
Intriguingly, we next confirmed the high levels of LPA in conditioned medium of mesothelial cells. Like cancer-associated fibroblasts, human peritoneal mesothelial cells also secrete factors that facilitate tumor progression [6]. LPA is known as an essential microenvironmental factor in ovarian cancer and is increased in the malignant ascites of ovarian cancer patients [29,30]. Notably, a recent study corroborated the constitutive release of LPA from human peritoneal mesothelial cells, which can promote ovarian cancer malignancy by enhancing cell proliferation, invasion, migration and adhesion [10]. Additionally, LPA exposure induces pro-inflammatory cytokine production in ovarian cancer cells [19,20,25]. Here, treatment with acacetin suppressed LPA release from mesothelial cells.
Next, LPA released into conditioned medium from peritoneal mesothelial cells activated RAGE-PI3K/AKT signaling in ovarian cancer cells. Previous research confirmed the overexpression of RAGE in ovarian cancer, suggesting it to be a useful biomarker of ovarian cancer prognosis [31]. Intriguingly, RAGE has previously been identified as a new receptor for LPA, mediating ovarian cancer growth and oncogenic characteristics in glioma cells [32]. Activation of PI3K/AKT signaling by RAGE is involved in the initiation, chemoresistance and metastasis of cancers, including ovarian cancer [33,34]. Moreover, the PI3K/AKT pathway has been shown to be associated with ovarian cancer-mesothelial adhesion [35]. Importantly, restoring RAGE-PI3K/AKT signaling by exogenous supplementation with LPA offset the effects of acacetin against mesothelial cell-evoked proliferation, invasion and inflammatory cytokine production in ovarian cancer cells. Therefore, these findings suggest that acacetin may attenuate mesothelial cell-evoked malignant potential in ovarian cancer cells. Nevertheless, LPA also plays critical roles in carcinogenesis by binding to its receptors [36]. Thus, a further study will be performed to investigate the involvement of LPA receptors in acacetin-mediated anti-tumor efficacy in the ovarian cancer microenvironment.
In summary, the present findings revealed that acacetin suppressed ovarian cancer cell growth and invasion. Additionally, treatment with acacetin also antagonized peritoneal mesothelial cell-evoked malignant potential in ovarian cancer cells.
"Medicine",
"Biology"
] |
Pangenome graph layout by Path-Guided Stochastic Gradient Descent
Abstract
Motivation: The increasing availability of complete genomes demands models to study genomic variability within entire populations. Pangenome graphs capture the full genomic similarity and diversity between multiple genomes. In order to understand them, we need to see them. For visualization, we need a human-readable graph layout: an embedding of the graph in a low-dimensional (e.g. two-dimensional) space. Due to a pangenome graph's potentially excessive size, this is a significant challenge.
Results: In response, we introduce a novel graph layout algorithm: the Path-Guided Stochastic Gradient Descent (PG-SGD). PG-SGD uses the genomes, represented in the pangenome graph as paths, as an embedded positional system to sample genomic distances between pairs of nodes. This avoids the quadratic cost seen in previous versions of graph drawing by SGD. We show that our implementation efficiently computes the low-dimensional layouts of gigabase-scale pangenome graphs, unveiling their biological features.
Availability and implementation: We integrated PG-SGD in ODGI, which is released as free software under the MIT open source license. Source code is available at https://github.com/pangenome/odgi.
Introduction
Reference genomes are widely used in genomics, serving as a foundation for a variety of analyses, including gene annotation, read mapping, and variant detection (Singh et al. 2022). However, this linear model is becoming obsolete given the accessibility to hundreds or even thousands of high-quality genomes. A single genome cannot fully represent the genetic diversity of any species, resulting in reference bias (Ballouz et al. 2019). In contrast, a pangenome models the entire set of genomic elements of a given population (Tettelin et al. 2008, Computational Pan-Genomics Consortium 2018, Eizenga et al. 2020, Sherman and Salzberg 2020). Pangenomes can be represented as a sequence graph incorporating sequences as nodes and their relationships as edges (Hein 1989). In the variation graph model (Garrison et al. 2018), genomes are encoded as paths traversing the nodes in the graph.
A graph layout is the arrangement of nodes and edges in an N-dimensional space. Graph layout algorithms aim to find optimal node coordinates in order to minimize overlapping nodes or edges, reduce edge crossings, and promote an intuitive understanding of the graph. One popular approach is force-directed graph drawing (Cheong and Si 2022), which uses physical simulation to produce esthetic layouts. The classical approach combines repulsive forces on all vertices and attractive forces on adjacent vertices. This is prone to getting stuck in local minima, but multi-layer strategies such as the Fast Multipole Multilevel Method (FM³) (Hachul and Jünger 2005) or Stochastic Gradient Descent (SGD) implementations alleviate this problem (Zheng et al. 2019). SGD uses the gradient of individual terms to approximate the gradient of a sum of functions.
A pangenome graph layout can provide a human-readable visualization of genetic variation between multiple genomes. However, the algorithm of Zheng et al. (2019) has a quadratic up-front cost in the number of nodes to find the pairwise distances that guide the layout, making it impossible to apply to pangenome graphs with millions of nodes. Also, existing generic graph layout approaches ignore the biological information inherent in pangenome graphs. One such bioinformatics tool is BandageNG, the current state of the art for genome graph visualization. It uses FM³, which only considers the nodes and edges of a graph.
In practice, MultiDimensional Scaling (MDS) is applied to minimize the difference between the visual distance and the theoretical graph distance. This can be accomplished by using pairwise node distances to minimize an energy function. Since pangenome graphs represent genomes as paths in the graph, a reasonable distance metric would be the nucleotide distance between a pair of nodes traversed by the same path. Such path sampling would overcome the quadratic costs of previous versions of graph drawing by SGD.
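The energy function mentioned here is usually a weighted stress term over node pairs. The following sketch, with invented distances, shows the quantity an MDS-style layout minimizes; in PG-SGD the theoretical distances d_ij would be nucleotide distances sampled along paths rather than a precomputed all-pairs matrix.

```python
import numpy as np

def stress(positions, dist, weights):
    """Weighted MDS stress: sum over i<j of w_ij * (||x_i - x_j|| - d_ij)^2."""
    total = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            ld = np.linalg.norm(positions[i] - positions[j])  # layout distance
            total += weights[i, j] * (ld - dist[i, j]) ** 2
    return total

# Toy theoretical distances between four nodes on a line
d = np.array([[0, 1, 2, 3],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [3, 2, 1, 0]], dtype=float)
w = np.where(d > 0, d ** -2.0, 0.0)  # common weighting: w_ij = d_ij^-2
x = np.random.default_rng(1).normal(size=(4, 2))  # random initial 2D layout
print(f"initial stress: {stress(x, d, w):.3f}")
```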
Typically, force-directed layouts are hard to compute (Wang et al. 2014). Although BandageNG applies FM³ for layout generation, its parallelism is bounded by the number of connected graph components. Alternatively, the lock-free HOGWILD! method offers a highly parallelizable and thus scalable SGD approach that can be applied when the optimization problem is sparse (Recht et al. 2011).
Here, we present a new pangenome graph layout algorithm which applies a Path-Guided SGD (PG-SGD) to use the paths as an embedded positional system to find distances between nodes, moving pairs of nodes in parallel with a modified HOGWILD! strategy. The algorithm computes the pangenome graph layout that best reflects the nucleotide sequences in the graph. To our knowledge, no generic graph layout algorithm takes into account such path-encoded biological information when computing a graph's layout.
PG-SGD can be extended to any number of dimensions. In the ODGI toolkit (Guarracino et al. 2022), we provide implementations for 1D and 2D layouts. These algorithms have already been successfully applied to construct and visualize large-scale pangenome graphs of the Human Pangenome Reference Consortium (HPRC) (Guarracino et al. 2023, Liao et al. 2023). In addition, we show that PG-SGD is almost an order of magnitude faster than BandageNG.
Algorithm
While PG-SGD is inspired by Zheng et al. (2019), we designed the algorithm to work on the variation graph model (Definition 2.1).
Definition 2.1. Variation graphs are a mathematical formalism to represent pangenome graphs (Garrison 2019). In the variation graph G = (V, E, P), nodes (or vertices) V = v_1 ... v_|V| contain nucleotide sequences. Each node v_i has a unique identifier i and an implicit reverse complement v̄_i. The node strand o represents the node orientation. Edges E = e_1 ... e_|E| connect ordered pairs of node strands (e_i = (o_a, o_b)), defining the graph topology. Paths P = p_1 ... p_|P| are series of connected steps s_i that refer to node strands in the graph (p_i = s_1 ... s_|p_i|); the paths represent the genomes embedded in the graph.
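To make Definition 2.1 concrete, here is a minimal Python rendering of the variation graph model. The class and field names are our own illustrative choices, not ODGI's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    node_id: int   # identifier i of node v_i
    reverse: bool  # which strand o of the node the path visits

@dataclass
class VariationGraph:
    sequences: dict[int, str]      # node id -> nucleotide sequence
    edges: set[tuple[Step, Step]]  # ordered pairs of node strands
    paths: dict[str, list[Step]]   # genome name -> series of steps

# A toy graph where two paths share the first node and then diverge
g = VariationGraph(
    sequences={1: "ACGT", 2: "TT", 3: "GGA"},
    edges={(Step(1, False), Step(2, False)),
           (Step(1, False), Step(3, False))},
    paths={"path1": [Step(1, False), Step(2, False)],
           "path2": [Step(1, False), Step(3, False)]},
)
```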
We report PG-SGD's pseudocode in Algorithm 1 and its schematic in Fig. 1. In brief, the algorithm moves one pair of nodes (v_i, v_j) at a time, minimizing the difference between the layout distance ld_ij of the two nodes and the nucleotide distance nd_ij of the same nodes as calculated along a path that traverses them. In the 2D layouts, nodes have two ends; when moving a pair of nodes, we actually move one end of each node. For clarification, an example is given in Fig. 1. v_i is the node associated with the step s_i sampled uniformly from all the steps in P. v_j is the node associated with the step s_j sampled from the same path as s_i by drawing from a uniform or a Zipfian distribution (Zipf 1932). The difference between nd_ij and ld_ij guides the update of the node coordinates in the layout. The magnitude r of the update depends on the learning rate µ. The number of iterations steers the annealing step size η, which determines the learning rate µ. A large η in the first iterations leads to a globally linear (in 1D) or planar (in 2D) layout. By decreasing η, the layout adjustments become more localized, ensuring that the nodes are positioned to best reflect the nucleotide distances in the paths (i.e. in the genomes).

Algorithm 1: Pseudocode of PG-SGD in 1D.
Originating from empirical inspection of word frequency tables, Zipf's law states that a word with rank n occurs 1/n times as often as the most frequent one. This law is modeled by the Zipf distribution. Sampling s_j from a Zipf distribution centered in the path position space of s_i increases the probability of drawing a nucleotide position close to s_i. There is thus a high chance of using small nucleotide distances nd_ij to refine the layout of nodes comprising a few base pairs. The Zipf distribution is also long-tailed, with many occurrences of low-frequency events. However, extremely long-range correlations might not be captured sufficiently, resulting in collapsed layouts for structures that are otherwise linear. To balance global and local layout updates, in half of the updates (flip flag in Algorithm 1), s_j is sampled uniformly instead of from a Zipf distribution, uniform sampling being more favorable for global updates. Furthermore, to enhance local linearity (in 1D) or planarity (in 2D) of the graph layout, a cooling phase skews the Zipfian distribution after half of the iterations have been completed. This increases the likelihood of sampling smaller nucleotide distances for the layout updates.
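Since Algorithm 1 itself is not reproduced in this text, the following is a simplified, single-threaded sketch of one PG-SGD epoch in 1D under our own naming; it is a hedged illustration of the sampling and update rules described above, not the ODGI implementation. For simplicity it samples a path first and then a step, whereas true uniform sampling is over all steps in P.

```python
import numpy as np

def pg_sgd_epoch(pos, paths, offsets, eta, rng, zipf_theta=1.5):
    """One simplified 1D PG-SGD epoch.

    pos: array of node layout positions (updated in place).
    paths: list of paths, each a list of node ids (steps).
    offsets: matching list giving each step's nucleotide offset in its path.
    eta: annealing step size that controls the learning rate.
    """
    n_updates = sum(len(p) for p in paths)
    for k in range(n_updates):
        p = rng.integers(len(paths))
        path, off = paths[p], offsets[p]
        if len(path) < 2:
            continue
        i = rng.integers(len(path))
        if k % 2 == 0:                      # 'flip': uniform sampling, global moves
            j = rng.integers(len(path))
        else:                               # Zipfian sampling favors nearby steps
            delta = min(int(rng.zipf(zipf_theta)), len(path) - 1)
            j = (i + delta) % len(path)
        if i == j:
            continue
        vi, vj = path[i], path[j]
        nd = abs(off[i] - off[j])           # nucleotide distance along the path
        ld = pos[vj] - pos[vi]              # signed layout distance
        mu = min(1.0, eta / max(nd, 1))     # weight short distances more strongly
        r = mu * (abs(ld) - nd) / 2.0       # magnitude of the update
        direction = np.sign(ld) if ld != 0 else 1.0
        pos[vi] += r * direction            # pull together if too far apart,
        pos[vj] -= r * direction            # push apart if too close
```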
Implementation
We implemented PG-SGD in ODGI (Guarracino et al. 2022): the 1D version can be found in odgi sort and the 2D version in odgi layout. To efficiently retrieve path nucleotide positions, we implemented a path index. This index is a strict subset of the XG index (Garrison et al. 2018) in which we avoid using succinct SDSL data structures (Gog et al. 2014). Instead, we rely on bit-compressed integer vectors, enabling efficient retrieval of path nucleotide positions to quickly compute nucleotide distances without having to store all pairwise distances between nodes in memory. This approach ensures scalability to large pangenome graphs representing thousands of whole genomes.
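The idea behind the path index can be sketched as follows: precompute, once, the nucleotide offset at which each step begins, so that a nucleotide distance becomes two array lookups instead of a graph traversal. Plain NumPy arrays stand in here for the bit-compressed integer vectors of the actual implementation.

```python
import numpy as np

def build_path_index(paths, node_seq_len):
    """For each path, store the nucleotide offset at which every step begins."""
    index = {}
    for name, steps in paths.items():
        offsets = np.zeros(len(steps), dtype=np.uint64)
        pos = 0
        for k, node_id in enumerate(steps):
            offsets[k] = pos
            pos += node_seq_len[node_id]  # advance by the node's sequence length
        index[name] = offsets
    return index

def nucleotide_distance(index, path_name, a, b):
    """Nucleotide distance between steps a and b of one path: O(1) lookups."""
    off = index[path_name]
    return abs(int(off[a]) - int(off[b]))

# Toy usage: two paths over nodes of length 4, 2, and 3 bp
idx = build_path_index({"path1": [1, 2], "path2": [1, 3]}, {1: 4, 2: 2, 3: 3})
print(nucleotide_distance(idx, "path1", 0, 1))  # -> 4
```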
Graph layout initialization can significantly influence the quality of the final layout. In the 1D implementation, by default, nodes are placed in the same order as they appear in the input graph, although we also provide support for random layout initialization. In 2D, we offer several layout initialization techniques. One approach places nodes in the first layout dimension according to their order in the input graph, adding either uniform or Gaussian noise in the second dimension. Another strategy arranges nodes along a Hilbert curve, an approach that often favors the creation of planar final layouts. We also support fixing node positions to keep nodes in the same order as they are in a selected path, such as a reference genome. This feature allows us to build reference-focused graph layouts (Supplementary Fig. S1d).
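A sketch of the first of these 2D initialization strategies, under assumed array conventions: nodes are spread along the first dimension by their order in the input graph, with uniform or Gaussian noise in the second dimension.

```python
import numpy as np

def init_layout_order_noise(num_nodes, sigma=1.0, gaussian=True, rng=None):
    """Order-based 2D initialization with noise in the second dimension."""
    rng = rng or np.random.default_rng()
    x = np.arange(num_nodes, dtype=float)  # order in the input graph
    if gaussian:
        y = rng.normal(0.0, sigma, num_nodes)
    else:
        y = rng.uniform(-sigma, sigma, num_nodes)
    return np.column_stack([x, y])  # shape (num_nodes, 2)
```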
Our implementation is multithreaded and uses shared memory for storing the layout in a vector, according to the HOGWILD! strategy (Recht et al. 2011). Threads perform layout updates without any locking for additional speed-up. This approach is feasible since pangenome graphs are typically sparse (Guarracino et al. 2022), with low average node degree. As a result, the updates only modify small parts of the entire layout. While the HOGWILD! SGD algorithm writes the layout updates to a shared non-atomic double vector, PG-SGD stores node coordinates in a vector of atomic doubles. This vector prevents any potential memory overwrites. Our tests revealed essentially no performance loss with respect to the non-atomic counterpart.
Performance
We apply the 2D PG-SGD to the human pangenome (Liao et al. 2023) from the HPRC to show the scalability of the algorithm. Experiments were conducted on a cluster with 24 Regular nodes (32 cores/64 threads with two AMD EPYC 7343 processors and 512 GB RAM) and 4 HighMem nodes (64 cores/128 threads with two AMD EPYC 7513 processors and 2048 GB RAM). We downloaded pangenome graphs for each chromosome (24 in total) and for the mitochondrial DNA. Each graph represents 90 whole human haplotypes: 44 diploid individuals plus the GRCh38 (Schneider et al. 2017) and CHM13 (Nurk et al. 2021) haploid human references (see Supplementary Table S1 for graph statistics). When applied to these pangenome graphs using one Regular node for each calculation, odgi layout's 2D PG-SGD implementation obtains the graph layouts in 50 min on average, with the highest run time observed for chromosome 16 (Supplementary Table S1). This is expected since chromosome 16 has one of the highest levels of segmentally duplicated sequence among the human autosomes (Martin et al. 2004). Repetitive sequences lead to graph nodes with a very high number of path steps, which are computationally expensive to work with (Guarracino et al. 2022). Memory consumption is 29.66 GB of RAM on average, with the memory peak again occurring with chromosome 16, due to the path index building phase. Given its scalability, we applied 2D PG-SGD to the full graph with all chromosomes together using a HighMem node (Supplementary Table S1). For comparison, BandageNG (https://github.com/asl/BandageNG, last accessed July 2023), the current state of the art for graph visualization, was used to calculate a 2D layout of each of the HPRC pangenome graphs. For a fair comparison, we did not rely on BandageNG's interactive GUI application, but executed BandageNG layout, which directly emits a 2D graph layout similar to odgi layout. BandageNG was not able to produce a layout for the full graph within 7 days, hitting the wall clock time limit of the cluster. On average, PG-SGD is ~8× faster than BandageNG while using ~2× less memory.
Pangenome graph layouts reveal biological features
Graph visualization is essential for understanding pangenome graphs and the genome variation they represent. We show how 2D PG-SGD allows us to gain insight into biological data by looking at the graph layout structure. In Fig. 2a, the chromosomes of the HPRC graph show the large-scale structural variations in the centromeres. Focusing on the major histocompatibility complex (MHC) of chromosome 6 (Fig. 2b), the 2D layout reveals the positions and diversity of all MHC genes (Fig. 2c). In Fig. 2d, the C4A and C4B genes are highlighted. Complementarily, we provide various 1D visualizations in Supplementary Fig. S1.
Discussion
We presented PG-SGD, the first layout algorithm for pangenome graphs that leverages the biological information available within the genomes represented in the graph. Other generic graph layout algorithms, such as the one offered by BandageNG, ignore this additional information. Our implementation efficiently computes the layout of pangenome graphs representing thousands of whole genomes.
Graph visualization is key for understanding genome variation, and the layouts produced by PG-SGD offer an unprecedented high-level perspective on pangenome variation. We implemented PG-SGD to generate layouts in 1D and 2D. These graph projections have already been employed in constructing and analyzing the first draft human pangenome reference (Liao et al. 2023), as well as in the discovery of heterologous recombination of human acrocentric chromosomes (Guarracino et al. 2023). Furthermore, they are applied in the creation and analysis of pangenome graphs for any species (Guarracino et al. 2022, Garrison et al. 2023). Of note, there still remains a gap in interactive and scalable solutions that merge layouts of large pangenome graphs with annotation. Our algorithm will underpin new pangenome graph browsers for studying graph layouts and the genome variation they represent (https://github.com/chfi/waragraph, last accessed July 2023).
The performance analysis shows that our 2D implementation outperforms BandageNG when handling large, complex pangenome graphs. While BandageNG was not able to deliver a layout of the whole HPRC graph within 1 week, our 2D PG-SGD calculated one within one day. There are some possible optimization approaches for future work to further improve the performance of PG-SGD, potentially making it fast enough for interactive use. The data structure could be optimized to improve cache performance. Moreover, the high degree of parallelism could be further exploited by using a GPU. In BandageNG, one cannot select the number of threads for the calculations; they are automatically chosen based on the number of connected components of the graph to draw. This limits its parallelism and leads to an unbalanced workload. Since BandageNG was primarily designed for assembly graphs, one may have to adjust its parameters depending on the input graph in order to speed up the layout generation or to adjust the highlighting of desired graph features.
The classical force model of state-of-the-art generic graph algorithms, such as FM³-based ones, places nodes according to their attractive and repulsive forces. This can be seen as equivalent to how our 2D PG-SGD moves the nodes' ends in 2D: if the nucleotide distance of the randomly chosen path steps is smaller than the layout distance of the nodes' ends, we move them closer together ("attractive force"), else we move them further away ("repulsive force"). However, the key difference is that this approach is path-guided: paths represent biological sequences in pangenome graphs, so it is as if PG-SGD considers a "biological force" for placing the graph nodes. Theoretically, it would be possible to combine our approach with a force-directed one. Combining both methods, we might get the best of both worlds: multithreadable PG-SGD iteratively applied to different graph layout levels. We can imagine that such an approach could lead to a further speedup when calculating the layout. However, for generic graphs, this would only work if path information for each node could be added: we would replace the classical physical simulation approach with our path-guided method. If such information is not available, one could randomly cover the graph with paths. This function is already provided in odgi cover. However, this is an NP-hard problem and our preliminary solutions proved ineffective.
With assembly graphs we face the same problem: they usually do not carry path information during each assembly step. One could map the initial assembly reads back against the assembly graph in order to build paths through the graph. This would allow us to obtain a layout using PG-SGD.
PG-SGD can be extended to any number of dimensions. It can be seen as a graph embedding algorithm that converts high-dimensional, sparse pangenome graphs into low-dimensional, dense, and continuous vector spaces, while preserving their biologically relevant information. This enables the application of machine learning algorithms that use the graph layout for variant detection and classification. Our future research involves leveraging these graph projections to detect structural variants and to identify and correct assembly errors. Moreover, we are considering extending the algorithm to RNA and protein sequences to support pantranscriptome graphs (Sibbesen et al. 2023) and panproteome graphs (Dabbaghie et al. 2023), respectively.
Figure 1. 2D PG-SGD update operation sketches. (a) The path information of the graph. path1 and path2 both visit the same first node; then their sequences diverge and they visit distinct nodes. (b-e) v_i/v_j or v_i/v_k is the current pair of nodes to update. ld_ij/ld_ik is the current layout distance. r, −r is the current size of the update. (b) Initial graph layout highlighting the future update of the two nodes of path1. (c) The graph layout after the first update. The nodes appear longer now, because we updated the ends of the nodes. Highlighted is the future update of the two nodes of path2. (d) The graph layout after the second update. Highlighted is the future update of the two nodes of path1. (e) Final graph layout after three updates using the 2D PG-SGD.
Figure 2. 2D visualizations of the Human Pangenome Reference Consortium (HPRC) 90-haplotype pangenome graph: all chromosomes, chromosome 6, the major histocompatibility complex (MHC), and the complement component 4 (C4). (a) odgi draw layout of the 90-haplotype HPRC pangenome graph. Displayed are all 24 chromosomes and the mitochondrial chromosome. A red rectangle highlights chromosome 6, which is shown in the subfigure below. (b) gfaestus screenshot of the chromosome 6 layout. Colored in blue is the MHC. The hairball in the middle is the centromere. The black structures in the centromere are edges. (c) gfaestus screenshot of the MHC. All MHC genes are color annotated and the names of the genes appear as a text overlay. (d) gfaestus screenshot of the region around C4, specifically highlighting genes C4A and C4B. The black lines are the edges of the graph.
"Computer Science",
"Biology"
] |
Optimization of Mixed Inulin, Fructooligosaccharides, and Galactooligosaccharides as Prebiotics for Stimulation of Probiotics Growth and Function
Prebiotics have become an important functional food because of their potential for modulating the gut microbiota and metabolic activities. However, different prebiotics can stimulate the growth of different probiotics. This study therefore focused on optimizing prebiotics to stimulate the growth and function of representative probiotics (Lacticaseibacillus rhamnosus (previously Lactobacillus rhamnosus) and Bifidobacterium animalis subsp. lactis). The culture medium was supplemented with three prebiotics: inulin (INU), fructooligosaccharides (FOS), and galactooligosaccharides (GOS). All prebiotics clearly stimulated the growth of the probiotic strains in both monoculture and co-culture. The highest specific growth rates of L. rhamnosus and B. animalis subsp. lactis were observed with GOS (0.019 h−1) and FOS (0.023 h−1), respectively. The prebiotic index (PI) scores of INU (1.03), FOS (0.86), and GOS (0.84) in co-culture at 48 h were significantly higher than the control (glucose). The mixture of prebiotics was optimized using the Box-Behnken design. The optimum ratios of INU, FOS, and GOS were 1.33, 2.00, and 2.67% w/v, respectively, yielding the strongest stimulation of probiotic growth, with the highest PI score (1.03) and total short chain fatty acid concentration (85.55 µmol/mL). A suitable ratio of mixed prebiotics could function as a potential ingredient for functional foods or colonic foods.
Introduction
The microbiota in the gut is composed of trillions of microorganisms, including bacteria, viruses, and fungi, playing a crucial role in digestive system functioning and overall health [1]. A healthy balance of microorganisms can help to prevent the overgrowth of harmful microorganisms that leads to infection and inflammation. An imbalance in the microbiota has been linked to a range of health problems, including autoimmune disorders, cardiovascular disease, certain types of cancer, obesity, and other metabolic disorders [2][3][4]. Presently, the study of the balance of gut microbiota relies on cutting-edge techniques, allowing researchers to better understand the composition and function of the gut microbiota, its interactions with the host, and person-specific applications through precision microbiota approaches [5].
Prebiotics are defined as "a substrate that is selectively utilized by host microorganisms conferring a health benefit" [6]. Prebiotics are not hydrolyzed or absorbed in the upper part of the gastrointestinal tract (stomach and small intestine), and they should be selectively fermented by beneficial microorganisms in the colon.
Determination of the Growth Rate and the Specific Growth Rate
A sterilized MRS culture medium (10 g/L peptone, 8 g/L beef extract, 4 g/L yeast extract, 2 g/L ammonium citrate, 1 g/L polysorbate 80, 5 g/L sodium acetate, 0.1 g/L magnesium sulfate heptahydrate, 0.05 g/L manganese sulfate monohydrate, and 2 g/L potassium hydrogen phosphate) was supplemented with the prebiotic solutions (INU, FOS, and GOS) as a carbon source; these media are denoted M-INU, M-FOS, and M-GOS, respectively. A positive control experiment was carried out using a culture medium that contained glucose (M-GLU), while MRS medium without glucose or prebiotic solution was used as a negative control (M-MRS). Bacterial strains (10^6 CFU/mL), alone or in co-culture, were then grown in the different media and incubated at 37 °C for 48 h. The plate count colony technique was used to determine growth at 0, 6, 12, 18, 24, 36, and 48 h.
The specific growth rate represents the increase in biomass of a cell population per unit of biomass concentration. The bacterial cultures, L. rhamnosus and B. animalis subsp. lactis, were centrifuged at 9950× g for 5 min at 4 °C before being rinsed twice with PBS. The pellets were diluted in 10-fold serial dilutions with PBS buffer. One milliliter of each probiotic dilution was poured onto its own culture medium plate, which was then incubated at 37 °C for 24-48 h. The specific growth rate (µ) was calculated using the following Equation (1) [23]:

µ = (ln x2 − ln x1)/(t2 − t1)     (1)

where t1 and t2 were the log phase period of the bacterial growth, x1 was the number of bacteria at time t1, and x2 was the number of bacteria at time t2.
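Equation (1) reduces to a two-point calculation over the log phase. A minimal sketch, using invented counts rather than the study's data:

```python
import math

def specific_growth_rate(x1, x2, t1, t2):
    """mu = (ln x2 - ln x1) / (t2 - t1), with x in CFU/mL and t in hours."""
    return (math.log(x2) - math.log(x1)) / (t2 - t1)

# Illustrative log-phase counts: 1e7 -> 1e9 CFU/mL between 6 h and 24 h
mu = specific_growth_rate(1e7, 1e9, 6, 24)
print(f"mu = {mu:.3f} per hour")
```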
Determination of Organic Acid
The concentrations of lactic acid, acetic acid, propionic acid, and butyric acid were determined using high-performance liquid chromatography (HPLC) [20,[24][25][26]. The bacterial cultures and co-cultures were prepared by centrifuging at 9950× g for 5 min at 4 °C. The supernatant (500 µL) was thoroughly combined with 500 µL of 5 mM sulfuric acid (RCI Labscan, Bangkok, Thailand) in the tube. The solution was filtered through a 0.22-µm filter (CNW Technologies, Shanghai, China) and kept in an amber glass vial for analysis. These organic acids were identified using a SUGAR column (6 µm, 8 × 300 mm, SH1011, Shodex, Munich, Germany) with an HPLC system (model LC-20AD, Shimadzu, Kyoto, Japan). The analytical column was kept at a constant temperature of 75 °C. The mobile phase, 5 mM sulfuric acid, was passed through a filter (CNW Technologies, Shanghai, China) and degassed for 30 min in an ultrasonic bath (Trassonic Digital S, Elma, Singen, Germany) before operation. The flow rate of the mobile phase was 0.6 mL/min in the gradient program. The organic acids were detected by an ultraviolet detector at 220 nm. The standard substances of lactic acid, acetic acid, propionic acid, and butyric acid were obtained from LOBA Chemie (Mumbai, India), RCI Labscan (Bangkok, Thailand), Ajax Finechem Pty (Seven Hills, New South Wales, Australia), and PanReac AppliChem (Darmstadt, South Hesse, Germany), respectively.
Determination of Prebiotic Index (PI)
The prebiotic index was determined by co-culturing probiotics (L. rhamnosus and B. animalis subsp. lactis) and pathogenic bacteria (E. coli and S. Typhi) (10^6 CFU/mL) in an MRS medium supplemented with glucose (M-GLU) or prebiotics (M-INU, M-FOS, and M-GOS). The co-culture was anaerobically incubated at 37 °C for 48 h. The colonies were then cultured on selective media: MRS agar supplemented with bromocresol purple (Fisher Scientific, Loughborough, UK) under anaerobic conditions at 37 °C for L. rhamnosus, and Bifidus selective media (BSM) agar (Fluka, Sigma-Aldrich, St. Louis, MO, USA) under strictly anaerobic conditions at 37 °C for B. animalis subsp. lactis. The bacterial colonies were counted and the index score was calculated according to the following Equation (2):

PI = (Lac/Total) + (Bif/Total) − (Eco/Total) − (Sal/Total)     (2)

where the PI value compares the increase in the growth of representative probiotic bacteria (Lac and Bif) to the growth of representative gut bacterial pathogens (Eco and Sal) in the presence of oligosaccharides. Lac is the log number (CFU/mL) of L. rhamnosus at sampling times divided by the log number (CFU/mL) at baseline (time 0); Bif, Eco, Sal, and Total are the corresponding ratios for B. animalis subsp. lactis, E. coli, S. Typhi, and total bacteria, respectively [27].
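Assuming the standard form of the prebiotic index given in Equation (2), the calculation can be sketched as follows; the log counts are invented for illustration.

```python
def growth_ratio(log_count_t, log_count_0):
    """Log number at the sampling time divided by the log number at baseline."""
    return log_count_t / log_count_0

def prebiotic_index(lac, bif, eco, sal, total):
    """PI = Lac/Total + Bif/Total - Eco/Total - Sal/Total (Equation 2)."""
    return (lac + bif - eco - sal) / total

# Hypothetical log CFU/mL values at 48 h vs 0 h (not study data)
lac = growth_ratio(8.4, 5.6)    # L. rhamnosus
bif = growth_ratio(8.9, 5.6)    # B. animalis subsp. lactis
eco = growth_ratio(3.7, 5.8)    # E. coli
sal = growth_ratio(3.9, 5.9)    # S. Typhi
total = growth_ratio(9.5, 6.0)  # total bacteria
print(f"PI = {prebiotic_index(lac, bif, eco, sal, total):.2f}")
```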
Optimization of Prebiotics Ratio by the Experimental Design
The ratio percentage of prebiotics was varied in MRS medium as a carbon source. Experiments were performed with three variables: INU (X1), FOS (X2), and GOS (X3). The variables at code levels of −1, 0, and 1 had low, medium, and high prebiotic content, respectively; the range of prebiotic content was 1.33-2.67% w/v [21]. A Box-Behnken design (BBD) was utilized for the optimization of the prebiotic ratio using Design Expert software (version 10, Stat-Ease Inc., Minneapolis, MN, USA), leading to a total of 17 runs (Table 1). Results were evaluated by analysis of variance (ANOVA) based on p-values at the 95% confidence level. The PI score and total short chain fatty acids (SCFAs) (acetic acid, propionic acid, and butyric acid) were assessed as responses of the experimental design.
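The same kind of analysis can be reproduced outside Design Expert. The hedged sketch below builds the 17-run three-factor BBD in coded units and fits a quadratic response-surface model with statsmodels; the response values are placeholders, not the measured PI scores.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Box-Behnken design for 3 factors: +/-1 on each factor pair, plus 5 center runs
runs = []
for f1, f2 in itertools.combinations(range(3), 2):
    for a, b in itertools.product([-1, 1], repeat=2):
        row = [0, 0, 0]
        row[f1], row[f2] = a, b
        runs.append(row)
runs += [[0, 0, 0]] * 5
design = pd.DataFrame(runs, columns=["A", "B", "C"])  # coded INU, FOS, GOS
# Coded levels -1/0/+1 correspond to 1.33/2.00/2.67% w/v

# Placeholder response; in the study these were the measured PI scores
rng = np.random.default_rng(0)
design["PI"] = 0.5 + 0.1 * design["A"] - 0.2 * design["C"] + rng.normal(0, 0.05, 17)

# Quadratic response-surface model (the study used 2FI for PI, quadratic for SCFAs)
model = smf.ols("PI ~ (A + B + C)**2 + I(A**2) + I(B**2) + I(C**2)", data=design).fit()
print(model.summary())
```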
Fourier Transform Infrared Spectroscopy (FTIR) Analysis
The phytochemical structure of the optimal prebiotic ratio was analyzed by FTIR with an IR microscope (NICOLET 6700 FT-IR, Thermo Scientific, Waltham, MA, USA) over a spectral range of 4000 to 400 cm−1 using KBr pellets, at a resolution of 2 cm−1 [28][29][30].
Statistical Analysis
All results of each experiment were determined in triplicate and expressed as mean values with standard deviations (SD). ANOVA and post hoc Tukey HSD multiple comparisons among means were performed using SPSS software (version 17, SPSS Inc., Chicago, IL, USA) to analyze the significant differences in the growth of probiotics, PI scores, and organic acids between the different prebiotics. p < 0.05 was considered statistically significant.
Kinetics of Monoculture Bacterial Growth on Different Media Supplemented Prebiotics
Plate dilution counts were used to determine growth parameters, which were expressed as log CFU/mL. As shown in Figure 1, the growth kinetics showed a decreasing trend for all bacteria in M-MRS, whereas the trend increased for all bacteria in M-GLU. At the start of the experiment (0 h), the counts of L. rhamnosus, B. animalis subsp. lactis, E. coli, and S. Typhi in culture media without carbon source and prebiotics (M-MRS) were, consecutively, 6.43 ± 0.02, 6.36 ± 0.04, 5.92 ± 0.04, and 5.73 ± 0.04 log CFU/mL. After incubation in M-MRS for 48 h, L. rhamnosus, B. animalis subsp. lactis, E. coli, and S. Typhi had decreased to 5.24 ± 0.02, 4.30 ± 0.04, 4.82 ± 0.11, and 2.82 ± 0.12 log CFU/mL, respectively. Considering probiotic bacteria cultured in M-GLU, the growth of L. rhamnosus and B. animalis subsp. lactis was 6.40 ± 0.06 and 6.37 ± 0.07 log CFU/mL, respectively, at the beginning of incubation. Following that, greater levels of growth were observed 24 h after incubation: 10.46 ± 0.05 log CFU/mL for L. rhamnosus and 9.45 ± 0.02 log CFU/mL for B. animalis subsp. lactis. For pathogenic bacteria, E. coli and S. Typhi cultured in M-GLU ranged from 4.84 ± 0.04 to 7.78 ± 0.10 and from 4.85 ± 0.16 to 7.76 ± 0.09 log CFU/mL, respectively, with the lowest level at 6 h and the highest level at 48 h.

In the media supplemented with prebiotics, the counts of L. rhamnosus ranged from 6.31 (0 h) to 8.82 ± 0.03 log CFU/mL (18 h) in M-FOS and from 6.73 ± 0.05 (0 h) to 10.34 ± 0.04 log CFU/mL (36 h) in M-GOS. After the initial culture period, B. animalis subsp. lactis in M-INU and M-GOS reached maximums of 9.74 ± 0.04 and 9.54 ± 0.15 log CFU/mL, respectively. However, B. animalis subsp. lactis in M-FOS exhibited the highest number at 18 h of incubation (9.72 ± 0.11 log CFU/mL) and the lowest number at 0 h of incubation (6.39 ± 0.05 log CFU/mL). In contrast, the pathogen counts in M-INU, M-FOS, and M-GOS continuously decreased to 3.14 ± 0.09, 3.81 ± 0.19, and 4.52 ± 0.14 log CFU/mL, respectively, at the end of the experiment.
Kinetics of Co-Culture Bacterial Growth on Different Medium Supplemented Prebiotics
The numbers of probiotics, L. rhamnosus ( Figure 3a) and B. animalis subsp. lactis (Figure 3b), as well as total bacteria ( Figure 3e) in M-GLU co-culture, showed an upward kinetic trend ranging from 5.17 ± 0.13 to 8.32 ± 0.02, 5.56 ± 0.12 to 8.49 ± 0.07, and 5.19 ± 0.02 to 10.57 ± 0.04 log CFU/mL, respectively. However, in the case of pathogens (E. coli and S. Typhi), the kinetic trend of co-culture bacteria in M-GLU decreased. The starting counts of E. coli (5.76 ± 0.10 log CFU/mL) and S. Typhi (5.87 ± 0.04 log CFU/mL) gradually decreased to 3.67 ± 0.12 and 3.93 ± 0.06 log CFU/mL at the end of the incubation period (48 h).
The growth trend of probiotics increased in the medium supplemented with all three prebiotics, with the highest counts at 48 h of incubation. The M-FOS showed the highest stimulation of the growth number of L. rhamnosus, from 5.63 ± 0.15 to 8.39 ± 0.07 log CFU/mL, while the M-GOS and M-INU stimulated growth numbers, from 5.66 ± 0.02 and 5.58 ± 0.13 to 8.23 ± 0.10 and 7.79 ± 0.02 log CFU/mL, respectively, as shown in Figure 3a.
The M-INU showed the highest count of B. animalis subsp. lactis, from 5.58 ± 0.12 to 8.87 ± 0.12 log CFU/mL, while the M-FOS and M-GOS stimulated growth from 5.71 ± 0.04 and 5.67 ± 0.07 to 8.28 ± 0.05 and 7.33 ± 0.09 log CFU/mL, respectively (Figure 3b). Similarly, the number of total bacteria increased in M-INU, M-FOS, and M-GOS (Figure 3c), reaching its highest level at 48 h of incubation: 9.40 ± 0.03, 9.67 ± 0.02, and 9.45 ± 0.08 log CFU/mL, respectively.
Optimization of Prebiotic Ratio in Culture Medium
The combination of prebiotics to achieve high quality was optimized using RSM, using the PI score and total SCFA contents as responses. Table 2 lists the actual and predicted outcomes of the 17 experimental runs according to the BBD. The actual results varied between 0.05 and 1.03 for the PI score and between 49.89 and 85.55 µmol/mL for the total SCFA content. The predicted data ranged from 0.02 to 0.95 for the PI score and from 48.50 to 85.15 µmol/mL for the total SCFA content. Tables 3 and 4 display the results of an analysis of variance (ANOVA) using a 2FI (two-factor interaction) model for the PI score and a quadratic model for the total SCFA contents. Outliers were excluded from the data analysis. The response models were found to be highly significant, with p-values less than 0.0001. The statistical significance of the model terms was evaluated using the respective p-values (p < 0.05). The p-value for "lack of fit" (0.5666 for the PI score and 0.8460 for the total SCFA contents) was insignificant relative to the error. According to the fit statistics of the PI score, the coefficient of determination (R²), adjusted R², and predicted R² were, respectively, 0.9952, 0.9903, and 0.9638, while the R², adjusted R², and predicted R² of the total SCFA contents were 0.9763, 0.9458, and 0.9057, respectively (Table 5). These data indicated that the model equations were adequate for predicting responses under a combination of variable factors. The regression equations of the predicted responses of the PI score and total SCFA contents were expressed as a 2FI model and a quadratic model, respectively, of the general forms shown below.
Y1 = β0 + β1A + β2B + β3C + β12AB + β13AC + β23BC

Y2 = β0 + β1A + β2B + β3C + β12AB + β13AC + β23BC + β11A² + β22B² + β33C²

where Y1 is the PI score, Y2 is the total SCFAs concentration (µmol/mL), A, B, and C are the coded factors (−1, 0, and 1), and the β terms are the regression coefficients fitted from the design.
The interactions of all factors and their effects on the PI score and SCFA content were shown as response surface plots, with red representing the maximum value and blue representing the minimum value (Figures 6 and 7). Run number 16, with 1.33% w/v of inulin, 2.00% w/v of FOS, and 2.67% w/v of GOS, showed the maximum growth stimulation of probiotics, with a PI score of 1.03 ± 0.07, and had the highest total SCFA content (85.55 ± 12.49 µmol/mL).
Phytochemical Structure Using FTIR
The FTIR pattern of the culture medium with prebiotics (at the ratio of run number 16) was compared before and after probiotic culturing, as shown in Figure 8. The FTIR spectra of the culture medium with prebiotics before incubation showed strong bands at 1000 to 800 cm−1 and broad bands between 3600 and 3000 cm−1, while after incubation the strong bands were at 1640 cm−1 and the broad bands were again noticed between 3600 and 3000 cm−1.
Discussion
More researchers in recent years have been combining methods from several fields in an effort to understand the complex interactions between dietary components and health impacts. Currently, many techniques are being applied in research on prebiotics, probiotics, and synbiotics to resolve issues related to safety, quality, function, and nutrition [32]. Because the efficiency of prebiotics can be specific to certain probiotic species and strains, different probiotic species and strains exhibit entirely distinct responses, making it difficult to investigate a specific product.

In this study, L. rhamnosus and B. animalis subsp. lactis, which are mainly abundant in the small intestine and colon, respectively, were employed as representative probiotic strains. According to a statement from the Thai Ministry of Public Health, these strains are included on the list of probiotics approved for use in foodstuffs. Moreover, L. rhamnosus appears to be the main homofermentative Lactobacillus species inhabiting the human gastrointestinal system. Many clinical studies have focused on selected strains: the effect of L. rhamnosus GG on energy metabolism and gut microbiota in obese mice [33] and the effect of B. animalis subsp. lactis BB-12 on improving the human gut microbiota [34].

In the case of monoculture (Figure 1), all prebiotic-supplemented culture media could promote the growth of L. rhamnosus and B. animalis subsp. lactis, but not of the pathogenic bacteria E. coli and S. Typhi. Similarly, Figueroa-Gonzalez et al. [15] studied the growth behavior of five probiotics (L. casei Shirota, L. casei 1, L. casei 2, L. rhamnosus GG, and L. rhamnosus) on three different prebiotics (INU, GOS, and lactulose). According to their results, all tested probiotics were capable of growing on medium supplemented with the studied prebiotics; however, the growth of the selected probiotics at each incubation time differed. Interestingly, all probiotics except L. casei Shirota showed a higher final growth rate and final growth in INU and GOS than in the control (lactose).

Related studies have reported that substrate and enzyme specificity affect the growth rate. Lactobacilli in the fermentation process may produce specific enzymes to digest prebiotics as carbohydrate substrates, resulting in carbohydrate catabolism. Glycolysis is the main pathway, converting glucose to pyruvate while producing ATP. Depending on the microorganism involved, pyruvate is transformed into several end products, such as lactic acid, ethanol, or other organic substances. Fermentation is an inefficient method of producing energy; nevertheless, it allows microorganisms to grow, conferring growth benefits [35,36]. Moreover, structure influences carbohydrate digestion, and fermentability is one of the physicochemical characteristics of the various fibers.
Because they determine the surface area exposed to bacterial degradation, fiber particle size and degree of solubility have a significant impact on the susceptibility of fibers to bacterial fermentation [37].
This research determined the specific growth rate because it represents the increase in the population over a given time period. L. rhamnosus exhibited the maximum specific growth rate in M-GOS (p < 0.05), while B. animalis subsp. lactis showed the highest specific growth rate in M-FOS (p < 0.05) (Figure 2). Similarly, L. reuteri C1 and C6 demonstrated the best growth in basal MRS media containing GOS (p < 0.05) when compared with other carbon sources [23]. Some factors allowing lactic acid bacteria (LAB) to reach their maximal specific growth rate in GOS include their enzyme machinery. One related enzyme allowing LAB to break down and utilize GOS is β-galactosidase, which is a common enzyme in many microorganisms, including Lactobacillus species [38]. This process of breaking the β-glycosidic link between galactose molecules in GOS releases free galactose that L. rhamnosus can utilize as a source of carbon and energy. In another study, among the tested carbohydrate sources, FOS was the most effective in enhancing the growth rate of Bifidobacterium Bf-1 and Bf-6 in skim milk [39]. Most bifidobacteria degrade and use FOS because they contain a competitive fructofuranosidase enzyme, which is abundantly produced by bifidobacteria in culture [38].
The diversity and interactions observed in natural surroundings might not be fully represented in a monoculture, which refers to a single-species culture of microorganisms. On the other hand, a co-culture, a culture containing multiple species of microorganisms, might more effectively represent the complexity of actual microbial communities. Under co-culture conditions, the culture medium with prebiotics could also stimulate the growth of both probiotics without promoting the growth of the two pathogenic bacteria, E. coli and S. Typhi (Figure 3). Buddington et al. [40] demonstrated that INU and FOS offered effective protection against the pathogens S. Typhimurium and Listeria monocytogenes in mice with aberrant crypt foci in the colon and in a cell line. Moreover, Bifidobacterium, together with prebiotic transgalactosylated oligosaccharides (TOS), could be used for anti-infective activity against Salmonella in a murine model [41].
To evaluate the prebiotic potential of different foods and ingredients, the prebiotic index (PI) was employed as a measure to assist in selecting prebiotic-rich foods. The PI value was calculated by comparing the increase in the growth of probiotic bacteria (where an increase in these populations is a positive effect) in the presence of the ingredient to the growth of less desirable bacteria (where an increase in these populations is a negative effect) [15,42,43]. One study [18] reported that INU, FOS, polydextrose, and isomaltooligosaccharides, both individually and in combination, affected the PI value; according to that study, INU had a positive effect on the growth of beneficial bacteria at 24 h but a negative impact at 8 h. In another report, conducted at pH 6 with a concentration of 2% w/v, PI scores of 0.91 for INU, 0.56 for FOS, and 5.19 for GOS were found [26]. Prebiotics and mixed prebiotics have different chemical structures and functionalities that influence their fermentation by gut bacteria and hence the PI. Some prebiotics, e.g., FOS, GOS, and XOS, are fermented more quickly by beneficial gut bacteria, resulting in a high prebiotic index score. Moreover, the degree of polymerization (DP), i.e., the number of sugar units in a prebiotic molecule, also affects fermentability and the prebiotic index score. Prebiotics with a low DP, such as FOS (DP of 2 to 8), GOS (DP of 2 to 8), and XOS (DP of 2 to 10), tend to be more fermentable and score higher than prebiotics with a high DP, such as INU (DP of 2 to 60) [44-46], because longer chains contain fewer non-reducing ends per unit mass and thus offer less substrate for hydrolysis by bacterial enzymes. However, resistant starch and polydextrose are examples of prebiotics that may need prolonged fermentation periods or other bacterial species to reach the same prebiotic index score as other prebiotics [46]. Studies have reported that many genes support healthy digestion by regulating prebiotic metabolism and the immune response to probiotics [15]; for example, genes encoding enzymes such as β-galactosidase and β-fructofuranosidase are involved in the breakdown of GOS and FOS, respectively [47,48]. These factors can yield different PI values. Notably, the PI score is only one of several methods used to assess prebiotic ability and is no guarantee that a product or diet is safe and efficient as a prebiotic. Therefore, it is necessary to consider other beneficial substances, such as organic acids.
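The study does not spell out the exact PI formula it uses, but a toy score in the spirit of the description above weighs growth of desirable organisms positively and growth of undesirable ones negatively; the function and fold-change numbers below are hypothetical and are not the formula used in the paper.

```python
def toy_prebiotic_index(probiotic_fold_change, pathogen_fold_change):
    """Illustrative prebiotic-index-style score: the net difference between
    the fold change (final/initial count) of the desirable probiotics and
    that of the undesirable pathogens on the test substrate."""
    return probiotic_fold_change - pathogen_fold_change

# Hypothetical 24 h fold changes on a GOS-supplemented medium
pi = toy_prebiotic_index(probiotic_fold_change=3.2, pathogen_fold_change=1.1)
print(f"PI-style score = {pi:.2f}")  # positive values favor the probiotics
```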
Prebiotics are indigestible food ingredients that typically pass through the gastrointestinal tract and are fermented by gut bacteria, producing mainly SCFAs such as acetic acid, propionic acid, and butyric acid, as well as other beneficial substances such as lactic acid [49]. These SCFAs play important roles in maintaining gut balance and influencing the metabolic system. Additionally, lactic acid can lower the pH of the gastrointestinal tract, creating a more suitable environment for beneficial bacteria. In the present study, the concentration of lactic acid in M-GLU was significantly higher (p < 0.05) than in all culture media with prebiotics (M-INU, M-FOS, and M-GOS). In contrast, the SCFA concentration of all media containing prebiotics was significantly greater (p < 0.05) than that of M-GLU. GLU is a readily available source of energy and carbon that yields lactic acid through glycolysis, a rapid and efficient route for microorganisms under anaerobic conditions, whereas SCFA production through the fermentation of prebiotics requires different metabolic pathways that are typically less efficient than glycolysis. This can explain the relative abundance of lactic acid compared to SCFAs in a culture medium containing GLU. Furthermore, no significant difference (p > 0.05) was found in the organic acid concentrations among the prebiotics. The organic acids ranked in the order acetic acid > propionic acid > butyric acid, as shown in Figure 5. Similarly, a related study revealed that the total SCFA concentration increased slightly in a three-stage continuous culture system after treatment with durum wheat dietary fiber (DWF) and enzyme-treated DWF, although no differences were found between the two tested DWFs regarding the percentage composition of SCFAs [50]; there, too, the SCFA concentrations followed the order acetic acid > propionic acid > butyric acid [50]. Likewise, SCFA production in batch culture by G12, G19, and G37 glucooligosaccharides, maltodextrin, and INU followed the order acetic acid > propionic acid > butyric acid [20]. SCFA production under co-culture conditions might suppress the growth of the selected pathogenic bacteria; this is supported by the fact that Bifidobacterium and Lactobacillus generate SCFAs and other substances that inhibit pathogens and reduce intestinal pH [9]. These findings suggest that SCFA production may be a critical mechanism by which prebiotics promote health.
Overall, learning more about mixed prebiotics is important for many reasons, including (i) advancing our knowledge of gut health: understanding how different prebiotics interact with one another and with the gut microbiome provides a better picture of gut health and how to improve it; (ii) creating functional foods: combined prebiotics may be more helpful for gut health and overall wellness; (iii) developing individual nutrition recommendations: selecting the prebiotics that are most beneficial for a specific gut microbiome can help individuals gain the maximum health advantage from their food; and (iv) treating gut illnesses: researchers can develop more effective treatments for conditions such as irritable bowel syndrome (IBS) and inflammatory bowel disease (IBD). Prebiotics, either alone, in mixtures, or combined with probiotics in the form of synbiotics, can improve human gastrointestinal health [51]. Therefore, the present study optimized the ratio of the different prebiotics INU, FOS, and GOS, based on a related study using 4 to 8 g/day of the total formulation [21], with a Box-Behnken design (BBD) of response surface methodology with 3 levels and 3 factors (Table 1). The response surface method has become one of the most widely used optimization approaches for finding optimal conditions with a minimum number of experiments [52], and the BBD is accepted as a good design for optimizing the main variables. In this study, run 16, with 1.33% w/v of inulin, 2.00% w/v of FOS, and 2.67% w/v of GOS, i.e., with the highest proportion of GOS, gave significantly different PI scores (1.03 ± 0.007) and total SCFA concentration (85.55 ± 12.49 µmol/mL) (Table 2). This is confirmed by the analysis of variance for the 2FI model of the PI score, which was significant (p < 0.0001) for the model and not significant (p = 0.5666) for the lack of fit (Table 3). In addition, the total SCFA concentrations were confirmed by the quadratic model, which was significant (p < 0.0001) for the model and not significant (p = 0.8460) for the lack of fit (Table 4). These results indicate that the generated Equations (3) and (4) can be used to predict the optimal components for the PI score and total SCFA production. Thus, run 16 is an appropriate ratio of the tested prebiotics for further study. The 3D response surfaces showed that the interactions of INU-GOS and FOS-GOS had a stronger effect on both responses, the PI score and total SCFA production, than the interaction of INU and FOS (Figures 6 and 7).
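To make the response-surface step concrete, the sketch below fits a quadratic model (intercept, linear, two-factor interaction, and squared terms) to design points by ordinary least squares. The coded design matrix mirrors a three-factor BBD with four center points (16 runs), but the response values are hypothetical placeholders, not the measurements in Tables 2-4.

```python
import numpy as np

# Coded levels (-1, 0, +1) for a three-factor Box-Behnken design, 16 runs
bbd = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0],
])

def quadratic_design_matrix(x):
    """Columns for a full quadratic response-surface model."""
    x1, x2, x3 = x.T
    return np.column_stack([
        np.ones(len(x)),         # intercept
        x1, x2, x3,              # linear terms
        x1*x2, x1*x3, x2*x3,     # two-factor interactions (2FI)
        x1**2, x2**2, x3**2,     # quadratic terms
    ])

# Hypothetical responses (e.g., total SCFA in umol/mL), one per run
y = np.array([60, 65, 70, 78, 58, 63, 72, 80,
              61, 69, 74, 83, 75, 76, 74, 75], dtype=float)

X = quadratic_design_matrix(bbd)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
y_hat = X @ coef
r2 = 1.0 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print("coefficients:", np.round(coef, 2))
print(f"R^2 = {r2:.3f}")
```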
Similarly, the optimized combination of prebiotics in a related study was 1.26% w/v of FOS, 6.75% w/v of GOS, and 0.99% w/v of INU, again with the highest proportion of GOS in the prebiotic mix. Many reports have addressed the effectiveness of mixed prebiotics, and GOS as a prebiotic has been shown to have an advantage over other substrates [51]. In another study, SCFA production was measured as the total of acetic, butyric, and propionic acids; only the probiotic produced significant quadratic effects on SCFA production, as did the interactions between probiotic and FOS and between probiotic and maltodextrin. The response surface indicated that SCFA production from the fermentation of FOS was closely associated with the uptake of the substrate [53]. The ability of lactobacilli and bifidobacteria to ferment specific oligosaccharides and polysaccharides can be important in the development of synbiotics [51].
FTIR spectra were used to check purity, identify biomolecules, and indicate the presence of functional groups. The strongest band of the prebiotic mixture before incubation was between 1000 and 800 cm−1, the range characteristic of carbohydrates (oligo- and polysaccharides). The IR spectra of carbohydrates can be divided into three specific spectral regions: 1200 to 900 cm−1, 3000 to 2700 cm−1, and 900 to 600 cm−1 [54]. The region between 1200 and 900 cm−1 is generally dominated by a complex sequence of intense peaks due mainly to strongly coupled C-C and C-O stretching and C-O-H and C-O-C deformation modes of various oligo- and polysaccharides [54]. One study found that the FTIR spectra increased over time in the polysaccharide-oligosaccharide region (1200 to 900 cm−1) for the conditions tested [55]. In the present study, the spectra of the mixed prebiotics after incubation did not show a peak between 1000 and 800 cm−1, possibly because the polysaccharide structure was digested during fermentation by the probiotics (Figure 8). After incubation, the main band of the prebiotic mixture was at 1640 cm−1, in agreement with a previous report in which the major FTIR peak after the fermentation of yogurt was observed at 1640 cm−1 [56].
Different prebiotics can stimulate the growth of different probiotics and enhance their function. Inulin, one such prebiotic, is commonly used as an ingredient in the functional food industry. However, pure INU, a long-chain polysaccharide extracted from chicory root, artichoke, and asparagus, tends to be costly. FOS and GOS, on the other hand, are short-chain oligosaccharides derived from sources such as cane sugar (for FOS) and milk sugar (for GOS). Different sources may confer slightly different properties and probiotic specificities. Consequently, the optimization of a mixed prebiotic ratio may yield a potential functional food ingredient, and knowledge of how these components interact informs the development of a dosage for our further clinical trials.
Conclusions
In this study, all prebiotics (INU, FOS, and GOS) had the potential to stimulate the growth of the probiotics L. rhamnosus and B. animalis subsp. lactis and showed a high prebiotic index and SCFA concentrations when compared with the control, but did not affect the growth of E. coli and S. Typhi. The optimal ratio of the three different prebiotics with a significant impact on the prebiotic index and SCFA production was 1.33% w/v of INU, 2.00% w/v of FOS, and 2.67% w/v of GOS (run 16). In the future, suitable ratios of mixed prebiotics will be used as synbiotics in alternative food supplements in order to improve probiotic stimulation or balance the gut microbiota in clinical trials with human volunteers.
"Agricultural And Food Sciences",
"Biology"
] |
Microwave radiometer to retrieve temperature profiles from the surface to the stratopause
TEMPERA (TEMPERature RAdiometer) is a new ground-based radiometer which measures radiation emitted by the atmosphere in a frequency range from 51 to 57 GHz. With this instrument it is possible to measure temperature profiles from the ground to about 50 km. It is the first ground-based instrument with the capability to retrieve temperature profiles simultaneously for the troposphere and stratosphere. The measurement is done with a filterbank in combination with a digital fast Fourier transform spectrometer. A hot load and a noise diode are used as stable calibration sources. The optics consist of an off-axis parabolic mirror to collect the sky radiation. Due to the Zeeman effect on the emission lines used, the maximum height for the temperature retrieval is about 50 km; the effect is apparent in the measured spectra. The performance of TEMPERA is validated by comparison with nearby radiosonde data and satellite data from the Microwave Limb Sounder on the Aura satellite. In this paper we present the design and measurement method of the instrument, followed by a validation of the retrieved temperature profiles against radiosonde and satellite data.
Introduction
Temperature is a key parameter for dynamical, chemical and radiative processes in the atmosphere. There exist several techniques to measure atmospheric temperature profiles, such as radiosondes (e.g. Luers, 1997; Ruffieux and Joss, 2003), FTIR (Fourier transform infrared, e.g. Smith et al., 1999; Feltz et al., 2003), lidar (e.g. Evans et al., 1997; Alpers et al., 2004), GPS occultation (e.g. Wickert et al., 2001; Hajj et al., 2002) or satellite and ground-based microwave radiometers (for examples see below). The advantage of ground-based radiometry is the high time resolution at a fixed location, which allows for observing local atmospheric dynamics over a long time period. Furthermore, in the near future there might be a lack of satellites measuring middle-atmospheric profiles of trace gases and temperature. Therefore ground-based radiometry is important for continuous observation of the atmosphere.
In the troposphere the atmospheric temperature is important for weather forecasting and nowcasting. Ground-based microwave radiometers for tropospheric temperature profiles are well established and exist in different configurations. Examples are MICCY (microwave radiometer for cloud cartography) (Crewell et al., 2001), RPG-HATPRO (Radiometer Physics GmbH Humidity and Temperature Profiler) (Rose et al., 2005), Radiometrics MP-3000A (Ware et al., 2003) and ASMUWARA (All-Sky MUlti WAvelength RAdiometer) (Martin et al., 2006).
In the stratosphere temperature can influence chemical processes, and the vertical temperature distribution is important for atmospheric studies investigating, for example, ozone or water vapor. The middle-atmospheric temperature profile can also be affected by dynamical processes, for example during sudden stratospheric warming (SSW, e.g. Scherhag, 1952; Flury et al., 2009; Scheiben et al., 2012) events, when the temperature in the stratosphere can change by several tens of degrees within a very short time. Therefore it is necessary to obtain temperature profiles with good temporal and spatial resolution. At present, data on stratospheric temperature profiles are mostly obtained by remote sensing methods using radiometers on satellites (e.g. the MLS instrument on the Aura satellite as described in Waters et al. (2006), the AMSU-A instrument on the Aqua satellite as described in Aumann et al. (2003) and the SABER instrument on the TIMED satellite as described in Remsberg et al. (2003)).
The possibility of ground-based measurements of stratospheric thermal emission from high-rotational, magnetic dipole transitions of molecular oxygen around 53 GHz was first shown by Waters (1973). It is interesting to note that no realization of a ground-based stratospheric temperature radiometer was reported in the literature for several decades. A recent realization of such an instrument was described by Shvetsov et al. (2010).
In this paper we describe the construction, calibration and utilization of a new ground-based TEMPERature RAdiometer, called TEMPERA, that is able to monitor temperature structures from ground to the upper stratosphere.
In the following section of this paper we present the measurement method and the instrumental set-up of TEMPERA. In the third section we describe the temperature retrieval. In the fourth section a validation of the TEMPERA data and a comparison with radiosonde and satellite data are presented.
Measurement method
TEMPERA measures thermal radiation from 51 to 57 GHz in the oxygen-emission region of the microwave spectrum. Oxygen is a well-mixed gas whose fractional concentration is independent of altitude below approximately 80 km. Therefore the radiation contains information primarily on atmospheric temperature.
For tropospheric temperature profiles we measure with a filterbank at 12 frequencies from 51 to 57 GHz on the wing of the 60 GHz oxygen emission complex. In addition, with a digital fast Fourier transform (FFT) spectrometer, we obtain information on the temperature profile in the stratosphere by measuring two pressure-broadened emission lines centered at 52.5424 and 53.0669 GHz.
A ground-based microwave radiometer measures a superposition of emission and absorption of radiation at different altitudes. The intensity I can be described with the radiative transfer equation:

$$ I(\nu, 0) = I_0 \, e^{-\tau(\nu, s_0)} + \int_0^{s_0} B(\nu, T(s)) \, e^{-\tau(\nu, s)} \, \alpha(\nu, s) \, \mathrm{d}s, \quad (1) $$

where I(ν, 0) is the measured intensity at frequency ν at the observation position 0 at the Earth's surface, I_0 is the microwave background intensity, s_0 is the position of the upper boundary of the atmosphere, T(s) is the physical temperature and α(ν, s) is the frequency-dependent absorption coefficient, both along the integration path s. B(ν, T) is the Planck function:

$$ B(\nu, T) = \frac{2 h \nu^3}{c^2} \, \frac{1}{e^{h \nu / (k T)} - 1}, \quad (2) $$

where h is Planck's constant, k is Boltzmann's constant and c is the speed of light.

Fig. 2. TEMPERA with its main components, the frontend and the parabolic mirror. TEMPERA is 1.1 m wide. The instrument is placed inside a thermally controlled lab in front of a blue styrofoam window through which the atmosphere is observed.

The opacity or optical depth τ is defined as

$$ \tau(\nu, s) = \int_0^{s} \alpha(\nu, s') \, \mathrm{d}s'. \quad (3) $$

In microwave radiometry the measured intensity of radiation is often expressed as a brightness temperature T_B according to the Rayleigh-Jeans approximation (valid for the case hν ≪ kT) of Planck's law:

$$ T_B = \frac{\lambda^2}{2 k} \, I, \quad (4) $$

where λ is the wavelength.
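As a small numerical illustration of Eqs. (3) and (4), the following sketch integrates a toy absorption-coefficient profile to obtain an opacity and converts an intensity into a Rayleigh-Jeans brightness temperature. The exponential absorption profile and the 250 K source are made-up placeholders, not TEMPERA data.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
c = 2.998e8          # speed of light, m/s

# Toy absorption-coefficient profile alpha(s) in 1/m, decaying with height
s = np.linspace(0.0, 80e3, 2000)      # path from the ground to 80 km
alpha = 1e-5 * np.exp(-s / 8000.0)    # hypothetical placeholder profile

# Eq. (3): opacity as the path integral of alpha (trapezoidal rule)
tau = np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(s))
print(f"zenith opacity tau = {tau:.3f}")

# Eq. (4): Rayleigh-Jeans brightness temperature from an intensity I
nu = 53.0669e9                        # one of the TEMPERA line frequencies, Hz
lam = c / nu
I = 2.0 * k_B * 250.0 / lam**2        # intensity of a 250 K source in the RJ limit
T_B = lam**2 / (2.0 * k_B) * I
print(f"T_B = {T_B:.1f} K")           # recovers 250 K by construction
```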
The measured spectrum T_B(ν, 0) is used to retrieve the temperature profile, as described in Sect. 3.1.
Instrumental description
TEMPERA is a heterodyne receiver covering a frequency range of 51-57 GHz. Figures 1 and 2 give an overview of the instrument. It consists of three parts: the frontend, which collects and detects the microwave radiation, and two backends, a filterbank and a digital FFT spectrometer, for the spectral analysis. The radiation is directed into the corrugated horn antenna using an off-axis parabolic mirror. The antenna beam has a half-power beamwidth (HPBW) of 4°. The signal is then amplified and downconverted to an intermediate frequency for further spectral analysis. A noise diode in combination with an ambient hot load is used for calibration. The hot load has the room temperature of the laboratory (around 293 K). The receiver noise temperature T_N is in a range from 475 to 665 K. An overview of the technical specifications is given in Table 1.
For the tropospheric measurements we use a filterbank (first backend) with 4 channels. In effect, however, we measure at 12 frequencies, listed in Table 2, by adjusting the local oscillator (LO) frequency with a synthesizer. For every measured zenith angle the LO frequency is changed three times. In this way we uniformly cover the range from 51 to 57 GHz at positions between the emission lines (see Fig. 3). The lower 9 channels have a bandwidth of 250 MHz, and channels 10-12 have a bandwidth of 1 GHz to enhance the sensitivity in the flat spectral region.
The second backend is used for the stratospheric measurements and contains a digital FFT spectrometer (Acqiris AC240) for the two emission lines centered at 52.5424 and 53.0669 GHz. The input signal is coupled from the main signal before the filterbank and passes an IQ mixer (Murk et al., 2009) before being fed to the spectrometer. When the LO frequency is changed by the synthesizer in the frontend, the second synthesizer, placed in front of the IQ mixer, changes its frequency as well, so that the FFT spectrometer always measures the same range. Furthermore, this allows us to measure the tropospheric and stratospheric parts at the same time. With the FFT spectrometer in combination with the IQ mixer we can measure the two emission lines with a resolution of 30.5 kHz and a bandwidth of 960 MHz. The receiver noise temperature T_N for the receiver-spectrometer combination is around 480 K. Table 2 lists all channels with the corresponding frequencies, bandwidths and receiver noise temperatures.
An example of a measured spectrum is shown in Fig. 4. The zoomed lines show the influence of the Zeeman effect through the broadened line shape in the center, with a plateau-like (rounded) shape around the line center (±1 MHz).
TEMPERA operates from a temperature-stabilized laboratory at the ExWi building of the University of Bern (Bern, Switzerland: 575 m above sea level; 46.95° N, 7.44° E; view direction in azimuth: southeast, 131.5°). A styrofoam window allows views of the atmosphere over the zenith angle (za) range from 30° to 70°. Operating the instrument inside a laboratory has the advantage that the radiometer is protected against adverse weather conditions. The frontend itself has additional temperature stabilization with Peltier elements in combination with a ventilation system, which stabilizes the frontend plate to within ±0.2 K.
Measurement cycle
Measurements are performed in periodic cycles with a period of 60 s. Each cycle starts with a hot-load calibration in combination with the noise diode (see also Sect. 2.4) for 9 s, followed by the atmosphere measurements. These consist of two parts: first, a 15 s period at a zenith angle za = 30° to observe with the FFT spectrometer and simultaneously with the filterbank, and second, a tipping curve in 3 s periods with angular steps of 5° up to za = 70°. After calibration, the output of each measurement cycle is a set of 108 brightness temperatures from the filterbank, at 12 frequencies and 9 zenith angles, and a calibrated spectrum at the two emission lines from the FFT spectrometer consisting of 32 768 channels covering the bandwidth of 960 MHz.
For the retrieval we use a mean of 15 measurement cycles for the troposphere and 120 measurement cycles for the stratosphere, leading to a time resolution of 15 min for tropospheric profiles and 120 min for stratospheric profiles.
Calibration
Under the assumption of linearity between the antenna temperature T_A and the detector-output voltage V_{T_A}, and assuming a perfect antenna so that the brightness temperature T_B is equal to T_A, the following relation is valid:

$$ V_{T_A} = g \, (T_A + T_N), \quad (5) $$

where g is the effective gain factor and T_N is the receiver noise temperature.
The calibration parameters g and T_N are obtained from the known radiation of an ambient hot load in combination with a noise diode.
To calibrate the noise diode we use a hot and a cold load. The cold load is a microwave absorber dipped in liquid nitrogen. Both loads are pyramidal microwave absorbers. With this calibration the so-called excess noise diode temperature T_ND is determined:

$$ T_{ND} = \frac{V_{HND} - V_H}{V_H - V_C} \, (T_H - T_C), \quad (6) $$

where T_H is the physical temperature of the hot load, T_C is the physical temperature of the cold load (around 77 K at an altitude of 575 m, depending on pressure), V is the detector-output voltage, and the indices H, HND and C denote the hot load, the hot load with the noise diode switched on, and the cold load, respectively.
The excess noise diode temperature T_ND is in a range from 40 to 75 K, depending on the frequency.
With T_ND we can finally calculate g and T_N:

$$ g = \frac{V_{HND} - V_H}{T_{ND}}, \quad (7) $$

$$ T_N = \frac{V_H}{g} - T_H. \quad (8) $$

The excess noise diode temperature T_ND is stable for more than 4 weeks (ΔT_ND < 0.3 K for all frequencies). Therefore we repeat its calibration with liquid nitrogen every month.
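A minimal sketch of this two-step calibration, following Eqs. (6)-(8) as given above; the detector-output voltages are hypothetical numbers chosen only to make the arithmetic visible, not measured TEMPERA values.

```python
def noise_diode_temperature(v_h, v_hnd, v_c, t_h, t_c):
    """Excess noise diode temperature from a hot/cold calibration (Eq. 6)."""
    return (v_hnd - v_h) / (v_h - v_c) * (t_h - t_c)

def gain_and_noise_temperature(v_h, v_hnd, t_h, t_nd):
    """Effective gain (Eq. 7) and receiver noise temperature (Eq. 8)
    from the hot load plus noise diode measured in every cycle."""
    g = (v_hnd - v_h) / t_nd
    t_n = v_h / g - t_h
    return g, t_n

# Hypothetical detector-output voltages (arbitrary units)
t_nd = noise_diode_temperature(v_h=1.00, v_hnd=1.08, v_c=0.72, t_h=293.0, t_c=77.0)
g, t_n = gain_and_noise_temperature(v_h=1.00, v_hnd=1.08, t_h=293.0, t_nd=t_nd)
print(f"T_ND = {t_nd:.1f} K, g = {g:.5f} V/K, T_N = {t_n:.1f} K")
```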
Retrieval
Our temperature retrieval is based on the optimal estimation method (OEM) (Rodgers, 2000). The forward model F, as expressed by Eqs. (1) and (4), is used to simulate our measured brightness temperature:

$$ y = F(x, b) + \epsilon, \quad (9) $$

where the vector y is the measured spectrum (brightness temperature), x is the true temperature profile, b contains additional forward model parameters, and ε is the measurement noise. In our case F is nonlinear. To retrieve the temperature profile from the measured brightness temperature, Eq. (9) has to be inverted. This is an ill-posed problem, and to obtain an "optimal" solution a statistical constraint is introduced. The solution can be defined as the zero of the gradient of the cost function J:

$$ J(x) = [y - F(x, b)]^T S_\epsilon^{-1} [y - F(x, b)] + (x - x_a)^T S_a^{-1} (x - x_a), \quad (10) $$

where x_a is the a priori temperature profile, S_a is the a priori covariance matrix and S_ε is the observation error covariance matrix. This principle is based on Bayes' probability theorem. It is assumed that the measurement uncertainties (S_ε) and the a priori knowledge (S_a) both follow Gaussian statistics. On the condition that the forward model is not strongly nonlinear, the posterior distribution is then also Gaussian. The solution x̂ is taken as the state with the highest probability, which for Gaussian statistics is also the expected value of the distribution.
To find the zero of the derivative of the cost function J (Eq. 10) we use the Gauss-Newton iterative method, leading to

$$ x_{i+1} = x_i + \left( K_i^T S_\epsilon^{-1} K_i + S_a^{-1} \right)^{-1} \left[ K_i^T S_\epsilon^{-1} \left( y - F(x_i, b) \right) - S_a^{-1} (x_i - x_a) \right], \quad (11) $$

where x_i is the retrieved temperature profile at iteration i and K is the weighting function (K = ∂F/∂x).
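For a linear toy forward model F(x) = Kx, a single step of Eq. (11) reduces to the familiar OEM update; the sketch below implements that step with NumPy on synthetic inputs, purely to show the linear algebra, not TEMPERA's actual retrieval.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_meas = 30, 20            # toy grid: 30 levels, 20 channels

K = rng.normal(size=(n_meas, n_state)) * 0.1   # weighting functions dF/dx
S_eps = np.eye(n_meas) * 1.0**2                # 1 K radiometric noise
S_a = 4.0 * np.exp(-np.abs(np.subtract.outer(np.arange(n_state),
                                             np.arange(n_state))) / 5.0)

x_a = np.full(n_state, 250.0)       # a priori temperature profile, K
x_true = x_a + rng.normal(size=n_state)
y = K @ x_true + rng.normal(size=n_meas)       # synthetic measurement

# One Gauss-Newton / OEM step starting from the a priori (prior term vanishes)
Si = np.linalg.inv(S_eps)
lhs = K.T @ Si @ K + np.linalg.inv(S_a)
rhs = K.T @ Si @ (y - K @ x_a)
x_hat = x_a + np.linalg.solve(lhs, rhs)

print(f"rms error vs truth: {np.sqrt(np.mean((x_hat - x_true)**2)):.2f} K")
```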
The retrieval is calculated on a pressure grid. We use the same resolution for the retrieval grid as for the forward model grid. In this paper all data are plotted against geometric height in km for practical reasons. For the conversion from a pressure grid to a kilometer grid we used radiosonde and ECMWF (European Centre for Medium-Range Weather Forecasts) data.
In the OEM an often-used tool is the averaging kernel matrix A (Rodgers, 2000), which describes the response of the retrieved temperature profile x̂ to a change in the "true" profile x:

$$ A = \frac{\partial \hat{x}}{\partial x} = D_y K_x, \quad (12) $$

where K_x = ∂F/∂x is the weighting function matrix and D_y = ∂x̂/∂y is the contribution function.

Table 3. List of the line parameters used for O_2. The data are from the PWR93 oxygen absorption model (Rosenkranz, 1993).

Table 4. Estimated uncertainties used for the systematic error analysis: water vapor 10 %, oxygen ("spectroscopic error") 1 %, calibration 1.5 K.
The rows of A are called the averaging kernels (AVK). Every row describes the sensitivity of the retrieval at a certain height level to a perturbation at other levels. The sum over a row of the AVK is called the measurement response (MR), which describes the contribution of the measurement to the retrieved profile at a certain height. The full width at half maximum (FWHM) of the AVK is often used as the height resolution of the retrieval.
There exist different methods to compare profiles from a ground-based radiometer with a reference profile from collocated radiosondes and satellites. One possibility is to interpolate the reference profiles to the levels of the radiometer and compare them directly. A second method is to convolve the interpolated reference profile x_r with the averaging kernel matrix A of the radiometer to take into account the different height resolutions:

$$ x_c = x_a + A \, (x_r - x_a), \quad (13) $$

where x_a is the a priori profile of the radiometer.
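Continuing the linear toy example, the sketch below derives the averaging kernel matrix of Eq. (12) for the linear case, reads off the measurement response per level, and convolves a hypothetical reference profile as in Eq. (13); all inputs are synthetic placeholders.

```python
import numpy as np

def averaging_kernels(K, S_eps, S_a):
    """A = (K^T S_eps^-1 K + S_a^-1)^-1 K^T S_eps^-1 K  (linear OEM, Eq. 12)."""
    G = K.T @ np.linalg.inv(S_eps) @ K
    return np.linalg.solve(G + np.linalg.inv(S_a), G)

def convolve_reference(x_r, x_a, A):
    """Smooth a high-resolution reference profile with the AVK (Eq. 13)."""
    return x_a + A @ (x_r - x_a)

rng = np.random.default_rng(1)
n = 30
K = rng.normal(size=(20, n)) * 0.1
S_eps = np.eye(20)
S_a = 4.0 * np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 5.0)

A = averaging_kernels(K, S_eps, S_a)
mr = A.sum(axis=1)                             # measurement response per level
print("levels with MR >= 0.6:", int(np.sum(mr >= 0.6)))

x_a = np.full(n, 250.0)
x_r = x_a + 5.0 * np.sin(np.arange(n) / 4.0)   # hypothetical radiosonde profile
x_c = convolve_reference(x_r, x_a, A)
print("max smoothing effect:", round(float(np.max(np.abs(x_c - x_r))), 2), "K")
```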
Forward model parameters
In the radiative transfer calculations we use the models of Rosenkranz and Liebe for the absorption coefficient calculations: Rosenkranz (1998) for H_2O, Rosenkranz (1993) for O_2 and Liebe et al. (1993) for N_2. The line parameters used for the oxygen (O_2) spectral lines are listed in Table 3.
In the forward model a water vapor profile with an exponential decrease is included. This profile is calculated from the surface water vapor density measured at the ExWi weather station (placed next to TEMPERA), assuming a scale height of 2000 m. Further details can be found in Bleisch et al. (2011). For other species like oxygen (O_2) and nitrogen (N_2) we used standard atmospheric profiles for summer and winter, which are incorporated into ARTS2 (midlatitude FASCODE (Fast Atmospheric Signature CODE) (Anderson et al., 1986)).
The presence of clouds has a relatively strong influence in the frequency range from 51 to 53 GHz, as can be seen in Fig. 5. In this figure the absorption coefficients of water vapor, liquid water, nitrogen and oxygen are plotted for 5 different frequencies between 51 and 57 GHz. The absorption coefficients of nitrogen, water vapor and liquid water are more or less the same for the 5 frequencies. This is not true for oxygen, which is strongly frequency dependent. Furthermore, the figure shows that cloud liquid water has an absorption coefficient similar to that of oxygen in the frequency range from 51 to 53 GHz. On the other hand, during clear sky (integrated liquid water, ILW = 0 mm) the main part of the absorption and emission in the atmosphere is from oxygen, dominating the contributions from water vapor and nitrogen. Further discussion of the temperature retrieval during weather conditions with and without clouds is presented in Sect. 4.2.
Weighting functions
Accurate and rapid calculations of weighting functions (WFs) are important for obtaining stable and fast inversions.
The general approach in ARTS to extract WFs is outlined by Buehler et al. (2005); some improvements have since been added. As part of this study, the analytical expressions used for temperature WFs were expanded to also consider local effects caused by the constraint of hydrostatic equilibrium. The resulting WFs cover all relevant aspects of the measurements of concern here, and the extension has drastically improved the calculation speed. See the user guide of ARTS2 (www.sat.ltu.se/arts/docs) for the expressions used and their limitations (e.g. they are not valid for limb sounding).
Error analysis
The total error of a retrieval consists mainly of the observation error, due to the measurement noise and uncertainty, and the smoothing error, caused by the vertical smoothing of the retrieval method. The estimates of the observation covariance matrix (S_o) and the smoothing covariance matrix (S_s) are given by (Rodgers, 2000):

$$ S_o = D_y S_\epsilon D_y^T, \quad (14) $$

$$ S_s = (A - I) \, S_a \, (A - I)^T, \quad (15) $$

where I is the identity matrix. The total errors of the observation and of the vertical smoothing are calculated as the square roots of the diagonal elements of the covariance matrices S_o and S_s, respectively. In addition, there are systematic errors. We calculate these by considering the estimated uncertainties in the water vapor profile, in the oxygen profile ("spectroscopic error") and in the calibration. This is done with a perturbation approach in which we change the respective profiles and the calibration within the estimated limits and compare with the standard retrieval. The estimated values used are listed in Table 4. The total systematic error is calculated as the square root of the sum of the variances from water vapor, oxygen and calibration.
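Reusing the linear toy quantities from the earlier sketches, the observation and smoothing errors of Eqs. (14) and (15) can be evaluated as below; again a sketch under synthetic inputs, not the TEMPERA error budget.

```python
import numpy as np

def retrieval_errors(K, S_eps, S_a):
    """1-sigma observation and smoothing errors per level (Eqs. 14-15)."""
    Si = np.linalg.inv(S_eps)
    lhs = K.T @ Si @ K + np.linalg.inv(S_a)
    D_y = np.linalg.solve(lhs, K.T @ Si)     # contribution functions
    A = D_y @ K                              # averaging kernel matrix
    I = np.eye(K.shape[1])
    S_o = D_y @ S_eps @ D_y.T                # Eq. (14)
    S_s = (A - I) @ S_a @ (A - I).T          # Eq. (15)
    return np.sqrt(np.diag(S_o)), np.sqrt(np.diag(S_s))

rng = np.random.default_rng(2)
K = rng.normal(size=(20, 30)) * 0.1
S_eps = np.eye(20) * 1.0**2                  # 1 K radiometric noise
S_a = 4.0 * np.exp(-np.abs(np.subtract.outer(np.arange(30), np.arange(30))) / 5.0)

obs_err, smooth_err = retrieval_errors(K, S_eps, S_a)
print("observation error [K]:", obs_err[:5].round(2))
print("smoothing error   [K]:", smooth_err[:5].round(2))
```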
The retrieval software Qpack2 (Eriksson et al., 2005) uses a pressure grid for the retrieval calculations. We use the same resolution for the retrieval grid as for the forward model grid. Our first pressure grid point is at the surface where the instrument is situated; this pressure value is taken from our weather station at the measurement time. The other grid points are selected such that there are more grid points in the lower troposphere than in the upper troposphere, because the measurements contain most information from the lowest layers of the troposphere. Our grid has a resolution of about 100 m from the ground to 1 km, about 300 m from 1 to 5 km and about 500 m from 5 to 10 km.
As an a priori temperature profile, monthly mean radiosonde data from Payerne (46.82° N, 6.95° E; 491 m above sea level and 40 km W of Bern) are used. The data are from 1994 to 2011, with soundings twice a day at 11:00 and 23:00 UT. For the a priori covariance matrix S_a we use a correlation function decreasing exponentially with a correlation length of 3 km. A standard deviation of 2 K at the ground, decreasing linearly to 1.5 K at 15 km, is assumed. The observation error is considered in the covariance matrix S_ε as a diagonal matrix. We use the standard deviation of around 15 measurements (1 measurement cycle: 1 min) at every zenith angle and frequency, because a temperature profile is retrieved every 15 min (96 profiles per day). The observation error is in a range of 0.4 to 2 K, depending on frequency, zenith angle and weather conditions. In the forward model there are no frequency points in the grid within ±1 MHz of the two line centers, because the Zeeman effect has not yet been incorporated into Qpack2/ARTS2. In the center of the lines (±16 MHz, 1000 channels) we use all channels, and on the wings of the lines we use a binning of 3 channels for data reduction.
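A sketch of one way to build such an a priori covariance matrix: exponential correlation with a 3 km length and a standard deviation tapering linearly from 2 K at the ground to 1.5 K at 15 km, as described above; the altitude grid itself is a placeholder.

```python
import numpy as np

def apriori_covariance(z_km, corr_length_km=3.0):
    """Exponentially correlated a priori covariance with a standard
    deviation decreasing linearly from 2 K (ground) to 1.5 K (15 km)."""
    sigma = np.clip(2.0 - 0.5 * z_km / 15.0, 1.5, 2.0)   # per-level std, K
    dist = np.abs(np.subtract.outer(z_km, z_km))         # level separations, km
    corr = np.exp(-dist / corr_length_km)                # exponential correlation
    return np.outer(sigma, sigma) * corr

z = np.linspace(0.0, 15.0, 50)        # placeholder altitude grid, km
S_a = apriori_covariance(z)
print("diagonal range:", S_a.diagonal().min().round(2), "to",
      S_a.diagonal().max().round(2), "K^2")   # 2.25 to 4.0 K^2
```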
Stratospheric retrieval
The stratospheric temperature profile is retrieved from the measurement of the two emission lines centered at 52.5424 and 53.0669 GHz. The results show that the intensity of the emission lines is highest at za = 30°. Therefore we decided to measure the emission lines at this angle.
During a measurement cycle the integration time with the FFT spectrometer is 15 s. For a temperature profile we integrate the measurement for half an hour, because then the noise level is low enough to obtain good retrievals. This requires two hours of measurement time, because only one quarter of the measurement time is spent with the digital FFT spectrometer; for the remaining period TEMPERA measures at za = 35-70° with the filterbank only (see also Sect. 2.3). For the forward model we use a vertical grid with a resolution of about 350 m; the retrieval grid has the same vertical resolution. Furthermore, in the forward model there are no frequency points in the grid within ±1 MHz of the two line centers, because the Zeeman effect is not yet fully incorporated into Qpack2/ARTS2. In the center of the lines (±16 MHz, 1000 channels) we use all channels, and on the wings of the lines we use a binning of 3 channels for data reduction.
As the a priori temperature profile from the ground to about 15 km we use the monthly mean of radiosonde data from Payerne from 1994 to 2011; above that, a climatology of Microwave Limb Sounder (MLS) data (Aura satellite) is used. For the a priori covariance matrix S_a we use a correlation function decreasing exponentially with a correlation length of 3 km; a standard deviation of 2 K is assumed. The observation error (residual) is considered in the covariance matrix S_ε as a diagonal matrix. The residual is the difference between the integrated spectrum and the fit of the spectrum. Under regular conditions the observation error is in a range from 0.5 to 1.5 K. Around 52.5424 GHz (first line) the noise is higher than around 53.0669 GHz (second line), as shown in Fig. 4.
We retrieve a temperature profile every 2 h, resulting in 12 profiles per day. To avoid effects due to clouds, the stratospheric retrieval is done for conditions with ILW ≤ 0.1 mm.
Results and validation
Here we present an analysis of temperature retrievals from TEMPERA over almost one year, from 1 January to 13 December 2012. The data consist of 2929 stratospheric profiles and 33 021 tropospheric profiles. The whole data set, covering the height range from the ground to 50 km, is shown in Fig. 6. The two independent data sets were merged by using the tropospheric data from the ground to 14 km and the stratospheric data from 14 to 50 km. Because of the different time resolutions, the stratospheric data had to be adapted to the time axis of the tropospheric data, which required an interpolation in time. The white lines indicate the measurement response MR = 0.6. The altitude range with MR ≥ 0.6 extends from the ground to 5-6 km (troposphere) and from 18 to 48 km (stratosphere). The tropospheric data cover all weather conditions, while the stratospheric data are restricted to conditions with ILW ≤ 0.1 mm to avoid cloud effects. The information about clouds (ILW) is from the radiometer TROWARA (TRopospheric WAter vapor RAdiometer) (Mätzler and Morland, 2009), which is installed next to TEMPERA and measures the radiation from the sky in the same direction at 21, 22 and 31 GHz.
More details about the tropospheric and stratospheric temperature profiles follow in the next sections, together with comparisons between the TEMPERA data and radiosonde and satellite data. All correlation coefficients of the coincident profiles displayed in this paper have a confidence level above 95 %.
Clear sky measurements
Figure 7 shows a typical temperature profile (upper plot) at a time without liquid water, compared with the profile obtained by a nearby radiosonde and with the a priori profile. The corresponding brightness temperatures (lower plot) from 14 November 2011 at 11:00 UT are also shown. In this case we use all 12 channels and 9 zenith angles, for a total of 108 measured brightness temperatures.
At this time an inversion between 1000 m and about 3000 m was present. Nevertheless, the temperature profile retrieved from the TEMPERA measurements agrees well with the radiosonde profile from Payerne and with the weather stations in Bern and Zimmerwald (46.88° N, 7.47° E; 905 m above sea level and 10 km S of Bern).
The forward model brightness temperatures, calculated for the retrieved profile, agree well with the measured brightness temperatures for all channels. The absolute difference between the measured and forward model brightness temperatures (residuals) is between 0.05 and 1.2 K. The averaging kernels, the height resolution (FWHM of the averaging kernels) and the measurement response are shown in Fig. 8. The height resolution in the first kilometer is about 300 m; from 1 to 10 km it increases to around 5 km.
The retrieval error, calculated with Eqs. (14) and (15), and the systematic errors are shown in Fig. 8. The retrieval error is less than 0.5 K from the ground to 1 km and then increases linearly to 1.5 K at 10 km. The observation error is much smaller than the smoothing error. The total systematic error is between 0.5 and 1.5 K in the altitude range from the ground to 10 km.
This example shows that during clear sky we obtain good results compared with the radiosondes and weather stations from the ground up to 7 km, with a measurement response higher than 0.6. From 7 to 10 km the measurement response is smaller than 0.6, and therefore more information comes from the a priori profile than from the measurements.
Measurements with cloudy sky
To retrieve temperature profiles during cloudy sky, the lowest 4 channels between 51.25 and 52.85 GHz are excluded from the calculations because of the unknown cloud influence shown in Fig. 5. We chose a threshold of ILW = 0.025 mm: above it only 8 channels are used, while below it the retrieval takes into account all 12 channels. At the moment we do not include liquid water (clouds) in the forward model. With this simple but effective method we obtain reasonable results.
A typical temperature profile from 14 November 2011 at 23:00 UT with ILW = 0.1 mm is shown in Fig. 9. This measurement uses 8 frequencies (channels 5-12: 53.35-57 GHz) and 9 zenith angles, providing 72 measured brightness temperatures (see also Fig. 9). The forward model brightness temperatures agree with the measured ones, with absolute differences (residuals) between 0.05 and 1.8 K. The temperature profile from TEMPERA also agrees with the radiosonde data from Payerne and the weather stations in Bern and Zimmerwald, with the best agreement in the altitude range from the ground to about 2.5 km.
The averaging kernels, the height resolution (FWHM of the averaging kernels) and the measurement response are shown in Fig. 10. The height resolution is similar to that for clear sky. During cloudy sky the measurement response is higher than 0.6 up to about 6 km, i.e. slightly less than during clear sky, because the unused channels carry information about the upper troposphere.
The retrieval error, calculated with Eqs. (14) and (15), is shown in Fig. 10 and is also similar to that of the clear sky measurements. The total systematic error is between 0.5 and 0.9 K in the altitude range from the ground to 10 km.
Comparison with radiosonde data over time
We compared the TEMPERA tropospheric temperature profiles with the radiosonde data from Payerne (40 km W of Bern) over the period from 1 January to 13 December 2012 under all weather conditions. Payerne is the closest radiosonde station to Bern, and the balloons very often drift in the direction of Bern, which makes the difference in location even smaller. The comparison consists of 644 profiles, restricted to cases with near time-coincident sounding and retrieval profiles (two cases per day). Figure 11 shows a time series at 5 altitude levels. We observe an excellent agreement below altitudes of about 1.5 km, with a correlation coefficient (CC) of ≥ 0.97. From 1.5 km to about 5 km the results are still well correlated with CC ≥ 0.86, and from 5 to 10 km the TEMPERA data agree with the radiosonde data with a CC between 0.77 and 0.86. The mean and the standard deviation of the difference T_TEMPERA − T_RS PAY over the altitude range from the ground to 10 km are shown in Fig. 12. This plot also shows that the best agreement is from the ground to about 1.5 km. The mean difference is between −0.5 and +1 K over the whole altitude range. The standard deviation is around 1 K from the ground to 1.5 km, increases to nearly 3 K at 3 km and remains constant at this value up to 10 km. A similar behavior of the mean value and standard deviation is seen in the comparison between the TEMPERA data and the convolved radiosonde data from Payerne (see Fig. 12, lower panel), where the agreement is better. The correlation coefficients at all levels for the unconvolved and convolved data are shown in Fig. 13.
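The comparison statistics quoted here (mean difference, standard deviation of the difference and correlation coefficient at each level) can be computed as in the sketch below; the two profile time series are random placeholders standing in for TEMPERA and radiosonde data on a common grid.

```python
import numpy as np

def compare_levels(a, b):
    """Per-level mean difference, standard deviation of the difference,
    and Pearson correlation for two (time x altitude) arrays."""
    diff = a - b
    bias = diff.mean(axis=0)
    std = diff.std(axis=0, ddof=1)
    a0 = a - a.mean(axis=0)
    b0 = b - b.mean(axis=0)
    cc = (a0 * b0).sum(axis=0) / np.sqrt((a0**2).sum(axis=0) * (b0**2).sum(axis=0))
    return bias, std, cc

rng = np.random.default_rng(3)
truth = 250.0 + rng.normal(size=(644, 20)).cumsum(axis=0) * 0.1   # correlated "truth"
tempera = truth + rng.normal(scale=1.0, size=truth.shape)         # placeholder series
radiosonde = truth + rng.normal(scale=1.0, size=truth.shape)      # placeholder series

bias, std, cc = compare_levels(tempera, radiosonde)
print("bias [K]:", bias[:3].round(2), "std [K]:", std[:3].round(2),
      "CC:", cc[:3].round(2))
```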
An interesting period in Fig. 11 is the first half of February 2012 over an altitude range from the ground to about 2.5 km, when there was a strong cooling in Switzerland from about 280 to 260 K at the ground. A further interesting case was a warming during August 2012, with a temperature of almost 290 K at 2.5 km. Both effects were measured with TEMPERA as well as by the radiosondes from Payerne.
Stratospheric temperature profiles
A typical retrieved stratospheric temperature profile is shown in Fig. 14 for 15 June 2012 from 12:04 to 14:04 UT. Over the altitude range from 20 to 45 km the profile agrees well with other measurements, i.e. satellite data from Aura/MLS and the radiosonde data from Payerne. The forward model brightness temperatures agree well with the measured brightness temperatures except around the line centers, because of the Zeeman effect, which has not yet been incorporated into the forward model. The residuals (see lower panel of Fig. 14) are between −1.5 and 1.5 K. The averaging kernels, the height resolution (FWHM of the averaging kernels) and the measurement response are shown in Fig. 15. The height resolution is about 15 km, and the measurement response is higher than 0.6 in the altitude range from 18 to 48 km. The retrieval error, calculated with Eqs. (14) and (15), is shown in Fig. 15. The total error is between 1.6 and 1.8 K in the altitude range of 18 to 48 km. Again, the observation error is much smaller than the smoothing error. The total systematic error is between 0.1 and 0.35 K in the altitude range of 18 to 48 km.
Comparison with radiosonde and satellite data over time
We compared the TEMPERA stratospheric temperature profiles with the radiosonde data from Payerne and with MLS data over the period from 1 January to 13 December 2012 under weather conditions with ILW ≤ 0.1 mm. The comparison between the TEMPERA data and the radiosondes from 10 to 29 km consists of 524 profiles, restricted to cases with near time-coincident sounding and retrieval profiles (two cases per day). Figure 16 shows the time series at 5 levels. Apart from some exceptions, the time evolution of the temperature in both measurements is similar. The best agreement is found at altitudes between 25 and 29 km, with a CC between 0.89 and 0.916. The mean and the standard deviation of the difference T_TEMPERA − T_RS PAY with unconvolved and convolved data are shown in Fig. 17. The mean difference is between −1 and 1 K, and the standard deviation is less than 3 K for the altitude range from 18 to 29 km. Again, the comparison is better with the convolved data (see Fig. 17, lower panel). The correlation coefficients at all levels with and without convolution are plotted in the upper panel of Fig. 20. Another comparison was made with MLS data (243 profiles). The criterion for a collocation of an MLS profile with the measurement site is ±1° (±110 km) in latitude and ±5° (±460 km) in longitude, and the data are restricted to cases with near time-coincident MLS and TEMPERA profiles. Figure 18 shows the time series at 5 levels from 25 to 45 km, with a CC between 0.87 and 0.92; here, too, the time evolution of the temperature is similar for both instruments. The mean of the difference T_TEMPERA − T_MLS (see Fig. 19) varies between 0 and 1 K from 20 to 30 km, dips to −2 K at 35 km, and then increases to around 2 K at 45 km. The standard deviation of this comparison is around 2.5 K from 20 to 35 km and around 4.5 K above 35 km. The agreement between the TEMPERA data and the convolved MLS data is better, with a CC ≥ 0.95 at altitude levels from 18 to 48 km (MR ≥ 0.6). The correlation coefficients at all levels with and without convolution are plotted in the lower panel of Fig. 20.
Both comparisons show that TEMPERA performs well in the stratosphere. The results are best in the range from 25 to 40 km, with CC ≥ 0.9 and with CC ≥ 0.95 for convolved data.
An interesting period was observed with TEMPERA and MLS at the beginning of January 2012 (Fig. 18). Both instruments measured a warming from about 260 to 280 K at an altitude of 45 km within a few days. At the end of October and the beginning of November there was a warming of more than 10 K at levels between 20 and 30 km; all three instruments (radiosonde, MLS and TEMPERA) measured these effects. This example shows that with TEMPERA it is possible to measure such interesting stratospheric events and to study local phenomena at the mesoscale. Together with data from our other radiometers for ozone and water vapor, we are now in an excellent position to study such SSW events with high time resolution.
Conclusions
TEMPERA, the measurement instrument used in this study, is a new ground-based radiometer for tropospheric and stratospheric temperature profiles. For the troposphere a filterbank with 12 channels (51-57 GHz) is used, and for the stratosphere two emission lines at 52.5424 and 53.0669 GHz are measured with a digital FFT spectrometer. This radiometer is the first instrument to measure temperature profiles from the ground to about 50 km, with high sensitivity from the ground to 6 km and from 18 to 48 km. TEMPERA also acquires information from 6 to 18 km, but with reduced sensitivity, meaning that more a priori information enters the temperature profiles at these levels.
First validations of the TEMPERA data against radiosonde and satellite data showed good agreement. The comparison with the radiosonde station in Payerne (40 km W of Bern) is not optimal because of the lateral distance; a better comparison would be obtained by operating TEMPERA at Payerne, which is planned for future work.
The comparison of 644 profiles in the troposphere with radiosondes showed that the mean difference of the data is between −0.5 and 1 K, with a correlation coefficient (CC) of ≥ 0.93 (convolved data: CC ≥ 0.96) from the ground to 2 km and CC ≥ 0.86 (convolved data: CC ≥ 0.89) from 2 to 6 km; from 6 to 10 km the CC is between 0.77 and 0.86.
The comparisons in the stratosphere with radiosonde (524 profiles) and satellite data (243 profiles) are also good. In the stratosphere the mean difference is between −2 and 2 K. The results are best in the range from 25 to 40 km, with CC ≥ 0.9 and CC ≥ 0.95 for convolved data. During cloudy sky there is a simple way to improve the tropospheric retrieval, namely to use only the higher frequencies above 53 GHz. For the stratospheric temperature retrieval we limited the retrievals to ILW ≤ 0.1 mm.
The upper height limit of the retrieval is at 50 km due to the Zeeman effect. This effect is also seen in the measurements with the digital FFT spectrometer as a line broadening near the line center.
The data in this paper were produced with two independent retrievals for the troposphere and the stratosphere and then merged in a color plot over an altitude range from the ground to 50 km. The benefit of this method is that we were able to use the full time resolution of the two retrievals (troposphere: 15 min; stratosphere: 2 h).
In the future our goal is to combine the two data sets into a single retrieval in order to obtain one profile from the ground to 50 km. For future work we plan to investigate the retrieval under cloudy conditions (liquid clouds only) in more detail. Furthermore, we aim to measure the Zeeman effect with a narrow-band software-defined radio (SDR) spectrometer. The effect will be incorporated into the forward model to improve the stratospheric temperature retrieval.
Fig. 4. Spectrum of brightness temperatures measured with TEMPERA on 16 January 2012 from 09:00 to 13:00 UT. Upper panel: the whole FFT spectrum, with all channels from 52.4 to 53.2 GHz; the gap in the middle is due to effects of the filters. Lower panels: zoomed spectra around the first line at 52.5424 GHz and the second line at 53.0669 GHz.
Fig. 6. Temperature profiles derived from TEMPERA from the ground to 50 km, from 1 January to 13 December 2012. The white lines indicate the region where MR = 0.6.
Fig. 7. Measurement of 14 November 2011 at 11:07-11:22 UT during clear sky (ILW = 0 mm). Upper panel: retrieved temperature profile (blue line); the a priori profile is the dashed black line; the retrieved profile is compared with radiosonde data from Payerne (red line) and with weather station data from Bern (black circle) and Zimmerwald (blue circle). Lower panel: brightness temperatures of the 12 channels measured with TEMPERA (black) compared with the forward model brightness temperatures (red) corresponding to the retrieval.
Fig. 8. Measurement of 14 November 2011 at 11:07-11:22 UT during clear sky (ILW = 0 mm). Upper panel: the averaging kernels (AVK, every 4th plotted) of TEMPERA, together with the measurement response (MR) and the full width at half maximum (FWHM, in km). Lower panel: error of the retrieval. The error statistics contain the total error (solid red), consisting of the observation error (solid blue) and the smoothing error (solid black). The total systematic error (dotted black) contains the uncertainties in the calibration (dashed blue), in the oxygen profile ("spectroscopic error", dashed red) and in the water vapor profile (dashed black).
Fig. 9. Measurement of 14 November 2011 at 22:57-23:12 UT during cloudy sky (ILW = 0.1 mm). Upper panel: temperature profile (blue line) retrieved with TEMPERA; the a priori profile is the dashed black line; the retrieved profile is compared with radiosonde data from Payerne (red line) and with weather station data from Bern (black circle) and Zimmerwald (blue circle). Lower panel: brightness temperatures of the 8 channels measured with TEMPERA (black) compared with the forward model brightness temperatures (red) corresponding to the retrieval.
Fig. 10. Measurement of 14 November 2011 at 22:57-23:12 UT during cloudy sky (ILW = 0.1 mm). Upper panel: the averaging kernels (AVK, every 4th plotted) of TEMPERA, together with the measurement response (MR) and the full width at half maximum (FWHM, in km). Lower panel: error of the retrieval. The error statistics contain the total error (solid red), consisting of the observation error (solid blue) and the smoothing error (solid black). The total systematic error (dotted black) contains the uncertainties in the calibration (dashed blue), in the oxygen profile ("spectroscopic error", dashed red) and in the water vapor profile (dashed black).
Fig. 11. Time series from 1 January to 13 December 2012 (644 profiles) of tropospheric temperature profiles from TEMPERA (blue) compared with radiosonde data from Payerne (red, regridded to the TEMPERA grid) for five different altitude levels. The black dashed line is the a priori.
Fig. 12. Comparison of 644 profiles (1 January to 13 December 2012) between TEMPERA and radiosonde data from Payerne over an altitude range from the ground (0.575 km) to 10 km. Plotted are the mean (black line) and plus and minus one standard deviation around the mean (black dashed line) of the differences between TEMPERA and radiosonde data from Payerne. The horizontal dark grey line indicates the region where MR = 0.6. Upper panel: TEMPERA compared with unconvolved radiosonde data from Payerne. Lower panel: TEMPERA compared with convolved radiosonde data from Payerne over the altitude region with MR ≥ 0.6.
Fig. 13 .
Fig. 13.Correlation coefficient of the comparison for 644 profiles (January, 1 to Decem between TEMPERA and radiosonde data from Payerne over an altitude range from groun to 10 km.The horizontal grey line indicates the region where MR=0.6.Correlation coe comparison between TEMPERA and the unconvolved (black line) and convolved (dashe altitude region with MR≥0.6) radiosonde data from Payerne.
Fig. 14 .
Fig.14.Measurement of 15 June 2012 from 12:04-14:04 (UT) during clear sky (ILW = 0 mm).Upper panel: temperature profile (blue line) retrieved with TEMPERA spectrometer measurements.The a priori profile is the dashed black line.The retrieved profile is compared with radiosonde data from Payerne (black line) and with MLS data (red).Lower panel: Brightness temperatures measured with TEMPERA (black) compared with the forward model brightness temperatures (red) that we received with the retrieval.Low in the panel the residuals are seen.In the forward model ±1 MHz around the two line centers we have no frequencies points in the grid because until now the Zeeman effect has not been incorporated into Qpack2/ARTS2.In the center of the lines (±16 MHz, 1000 channels) we use all channels and on the wings of the line we use a binning of 3 channels for data reduction.
Fig. 16 .Fig. 16 .
Fig. 16.Timeseries (524 profiles) of stratospheric temperature profiles from TEMPERA (blue) compared with radiosonde data from Payerne (red, regridded to TEMPERA grid) for five different altitude levels.The black dashed line is the a-priori.
Fig. 17 .
Fig. 17.Comparison of 524 profiles (1 January to 13 December 2012) between TEMPERA and radiosonde data from Payerne over an altitude range from 10 km to about 29 km.Plotted are the mean (black line) and plus and minus one standard deviation around the mean (black dashed line) of the difference between TEM-PERA and radiosonde data from Payerne.The horizontal dark grey line indicates the region where MR = 0.6.Upper panel: TEMPERA compared with unconvolved radiosonde data from Payerne.Lower panel: TEMPERA compared with convolved radiosonde data from Payerne over an altitude region with MR ≥ 0.6.
Fig. 18 .Fig. 18 .
Fig. 18.Timeseries (243 profiles) of stratospheric temperature profiles from TEMPERA (blue) compared with MLS data (red, regridded to TEMPERA grid) for five different altitude levels.The black dashed line is the a-priori.
Fig. 19 .
Fig. 19.Comparison of 243 profiles (1 January to 13 December 2012) between TEMPERA and MLS data over an altitude range from 15 km to about 55 km.Plotted are the mean (black line) and plus and minus one standard deviation around the mean (black dashed line) of the difference between TEMPERA and MLS data.The horizontal dark grey lines indicate the region where MR = 0.6.Upper panel: TEMPERA compared with unconvolved MLS data.Lower panel: TEMPERA compared with convolved MLS data over an altitude region with MR ≥ 0.6.
Fig. 20 .Fig. 20 .
Fig. 20.Correlation coefficient of the comparison between TEMPERA and radiosonde data (upper panel, 524 profiles) and MLS data (lower panel, 243 profiles) during the period from December, 13 2012.The horizontal grey lines indicate the region where MR=0.6.Correlati of the comparison between TEMPERA and the unconvolved (black line) and convolved data ( line, altitude region with MR≥0.6).
4.3.2Sudden stratospheric warming (SSW) over Bern during winter 2012/2013At the end of 2012 and in the beginning of 2013 a sudden stratospheric warming (SSW) occurred that was observed over Bern.During a SSW the temperature in the stratosphere increases by several tens of degrees within a very short time.This type of warming was first observed byScherhag (1952).SSW events over Bern were observed in 2008 and 2010.Flury et al. (2009) and Scheiben et al. (2012) reported the influence of these temperature increases on ozone and water vapor in the stratosphere, measuring with ground-based microwave radiometers and using temperature profiles from ECMWF or MLS to investigate the SSW.We measured the recent 2012 SSW event with the new ground-based radiometer TEMPERA.The time series from 21 December 2012 to 4 February 2013 (46 days) is shown in Fig. 21.In this plot we see a strong warming by about 30 K at an altitude of 40 km in the time period from 21 December 2012 to 25 December 2012.The warming stayed with temperatures between 250 to 290 K until 12 January 2013 over the altitude range of 40 to 50 km.The warmest region with 290 K was near 45 km.After this time period the temperature decreased to temperatures between 240 to 265 K.
Table 2 .
Specifications of the 12 tropospheric channels (ch1-ch12) and of the FFT spectrometer (ch_fft) with frequency f , the bandwidth B and receiver noise temperature .
Table 4 .
Estimated uncertainties for the calculations of the systematic retrieval errors. | 11,333 | 2013-09-25T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Glucose 6-P Dehydrogenase—An Antioxidant Enzyme with Regulatory Functions in Skeletal Muscle during Exercise
Hypomorphic Glucose 6-P dehydrogenase (G6PD) alleles, which cause G6PD deficiency, affect around one in twenty people worldwide. The high incidence of G6PD deficiency may reflect an evolutionary adaptation to the widespread prevalence of malaria, as G6PD-deficient red blood cells (RBCs) are hostile to the malaria parasites that infect humans. Although medical interest in this enzyme deficiency has been mainly focused on RBCs, more recent evidence suggests that there are broader implications for G6PD deficiency in health, including in skeletal muscle diseases. G6PD catalyzes the rate-limiting step in the pentose phosphate pathway (PPP), which provides the precursors of nucleotide synthesis for DNA replication as well as reduced nicotinamide adenine dinucleotide phosphate (NADPH). NADPH is involved in the detoxification of cellular reactive oxygen species (ROS) and de novo lipid synthesis. An association between increased PPP activity and the stimulation of cell growth has been reported in different tissues including the skeletal muscle, liver, and kidney. PPP activity is increased in skeletal muscle during embryogenesis, denervation, ischemia, mechanical overload, the injection of myonecrotic agents, and physical exercise. In fact, the highest relative increase in the activity of skeletal muscle enzymes after one bout of exhaustive exercise is that of G6PD, suggesting that the activation of the PPP occurs in skeletal muscle to provide substrates for muscle repair. The age-associated loss in muscle mass and strength leads to a decrease in G6PD activity and protein content in skeletal muscle. G6PD overexpression in Drosophila Melanogaster and mice protects against metabolic stress, oxidative damage, and age-associated functional decline, and results in an extended median lifespan. This review discusses whether the well-known positive effects of exercise training in skeletal muscle are mediated through an increase in G6PD.
The Pentose Phosphate Pathway and the Regulation of G6PD
Glucose is catabolized by two pathways: glycolysis, to generate ATP, and the pentose phosphate pathway (PPP), also known as the hexose monophosphate shunt, to generate reduced nicotinamide adenine dinucleotide phosphate (NADPH) and ribose 5-phosphate (R5P) for nucleotide synthesis.
The PPP was fully elucidated in the 1950s owing to the joint work of several researchers including Efraim Racker [1], Bernard Horecker [2], and Frank Dickens [3]. However, in the 1930s, Otto Warburg was the first to provide evidence of the existence of the PPP when he studied the oxidation of glucose 6-phosphate (G6P) to 6-phosphogluconate (6PG) and discovered NADP + [4].
The PPP competes with glycolysis for the catabolism of glucose 6-P (G6P). In the oxidative PPP branch, G6P is converted to ribulose-5P with the loss of CO 2 and the formation of two NADPH molecules. Glucose 6-P dehydrogenase (G6PD) catalyzes the first committed step of this PPP branch, which involves the conversion of G6P to 6PG and the generation of the first NADPH molecule. This irreversible reaction is unique to the PPP and has a primary role in the regulation of this pathway [6]. NADPH can be also synthesized by other enzymes such as the NADPH-malic enzyme, NADPH-dependent isocitrate dehydrogenase, and transhydrogenases [7].
Across all forms of life, NADPH donates high-energy electrons for reductive biosynthesis and antioxidant defense [8]. One of the main functions of NADPH in our cells is in the maintenance of redox homeostasis [9]. NADPH is the electron donor for the antioxidant enzymes glutathione reductase (GR) and thioredoxin reductases (TrxR). Reduced glutathione (GSH) and reduced thioredoxin (Trx(SH) 2 ) provide reducing equivalents for glutathione peroxidase (GPx), glutaredoxins (Grx), and peroxiredoxins (Prx). Thus, NADPH is located at the core of the antioxidant defense [10]. Another function of the pyridine nucleotide NADPH is to boost biosynthetic reactions in our cells. It provides the reducing power for fatty acids and cholesterol synthesis [9]. Finally, NADPH acts as the coenzyme of NADPH-oxidase enzymes (NOXs), which-through the generation of the superoxide radical-are involved in the oxidative burst and its defensive functions in our immune cells (granulocytes and macrophages) [11], and even in other cellular types [12,13]. Nitric oxide synthases (NOS), dihydrofolate reductase (DHFR), and cytochrome P450 oxidoreductase are also NADPH-dependent enzymes ( Figure 2). glutathione (GSH) and reduced thioredoxin (Trx(SH)2) provide reducing equivalents for glutathione peroxidase (GPx), glutaredoxins (Grx), and peroxiredoxins (Prx). Thus, NADPH is located at the core of the antioxidant defense [10]. Another function of the pyridine nucleotide NADPH is to boost biosynthetic reactions in our cells. It provides the reducing power for fatty acids and cholesterol synthesis [9]. Finally, NADPH acts as the coenzyme of NADPH-oxidase enzymes (NOXs), which-through the generation of the superoxide radical-are involved in the oxidative burst and its defensive functions in our immune cells (granulocytes and macrophages) [11], and even in other cellular types [12,13]. Nitric oxide synthases (NOS), dihydrofolate reductase (DHFR), and cytochrome P450 oxidoreductase are also NADPH-dependent enzymes ( Figure 2). The non-oxidative PPP is a flexible pathway that is able to adapt to varying cellular needs through the generation of different phosphorylated carbohydrates with three, four, five, or seven carbons [14]. This branch begins with a bifurcation: the ribulose-5P obtained from the oxidative PPP after epimerization is transformed into xylulose 5-phosphate or can isomerize and form ribose-5-phosphate, which can be used for nucleotide synthesis. The main modes of the PPP depending on cellular needs are summarized in Figure 3 [15]. Although the corresponding stoichiometric reaction for each mode is shown, the carbon flux is difficult to quantify in cells [15], and a situation in which all the flux is directed to one is unlikely. On the other hand, mode 3, also known as the recycling PPP mode, uses steps from gluconeogenesis; therefore, it only can take place in cells containing fructose biphosphatase. Mode 4 represents the standard operation of the pathway. The non-oxidative PPP is a flexible pathway that is able to adapt to varying cellular needs through the generation of different phosphorylated carbohydrates with three, four, five, or seven carbons [14]. This branch begins with a bifurcation: the ribulose-5P obtained from the oxidative PPP after epimerization is transformed into xylulose 5-phosphate or can isomerize and form ribose-5-phosphate, which can be used for nucleotide synthesis. The main modes of the PPP depending on cellular needs are summarized in Figure 3 [15]. 
Although the corresponding stoichiometric reaction for each mode is shown, the carbon flux is difficult to quantify in cells [15], and a situation in which all the flux is directed to one is unlikely. On the other hand, mode 3, also known as the recycling PPP mode, uses steps from gluconeogenesis; therefore, it only can take place in cells containing fructose biphosphatase. Mode 4 represents the standard operation of the pathway.
As previously mentioned, G6PD is the key enzyme in the regulation of the PPP. Therefore, any factor able to modify the level or activity of G6PD will determine the flow of the PPP. The "coarse control" of the PPP is carried out by modifications in the levels, location, and activity of G6PD [16]. Factors such as diet composition can induce changes in the synthesis of G6PD [17,18]. An excess of carbohydrates in the diet leads to lipogenesis and the deposition of fat, which is associated with a 5-10-fold increase in G6PD activity in the liver [16]. Accordingly, the expression of the G6PD gene is upregulated by the major transcription factor sterol-responsive element binding protein [19]. High insulin and low glucagon levels, which are associated with this kind of diet, have also been described as regulators of G6PD through the control of mRNA synthesis [20]. The transcription factor Nrf2, which regulates the antioxidant cellular response, also enhances G6PD gene expression [21]. Interestingly, diets rich in polyunsaturated fatty acids (PUFAs) have the opposite effect on G6PD levels.
However, the main regulating factor for G6PD activity is the NADPH/NADP + ratio. NADPH is a competitive inhibitor of G6PD [22], while NADP + is required to maintain the conformation of G6PD and thus its catalytic activity [23]. If the NADPH/NADP + ratio is increased, G6PD activity is lowered, and vice versa. In vitro experiments have shown no G6PD activity with NADPH/NADP + ratios close to 10 [16]. The fact that the NADPH/NADP + ratio in physiological conditions is high (70-300 depending on the tissue) [24] led Egglestone and Krebs to postulate the existence of a mechanism able to modulate the inhibition of G6PD by NADPH [16]. These authors, after testing the effect of over a hundred cell constituents, demonstrated that the physiological concentrations of oxidized glutathione (GSSG) and AMP were able to counteract the inhibition of G6PD by NADPH [16]. The inhibitory effect of GSSG on NADPH could not be attributed to GR activity since a complete inhibition of the enzyme did not abolish the GSSG effect. Recently, a similar role of GSSG in the "fine control" of the PPP has been assigned to a 100 kDa protein named CRING (cofactor that reverses the NADPH inhibition of G6PD). GSSG is required for CRING function. CRING is found in specific tissues including adipose, liver, and adrenal tissues [7]. Finally, the post-translational modification of G6PD also plays a role in its activity, as is the case with G6PD phosphorylation by the nonreceptor tyrosine kinase Src. As a consequence, the inhibition of Src causes reduced G6PD activity in endothelial cells [25]. This mode dominates when the need for R5P is higher than that for NADPH, for instance, in proliferative cells.
In this situation, the glycolytic metabolites 3GP and F6P can be converted in R5P through the reversible non-oxidative PPP. The oxidative PPP and its associated NADPH formation are bypassed. MODE 2: This mode occurs when the needs for NADPH and R5P are balanced. Then, ideally, from one molecule of G6P two molecules of NADPH and a molecule of R5P can be obtained with no generation of glycolytic metabolite. MODE 3: This mode is adopted when the cellular need for NADPH exceeds that for R5P and ATP, for instance, during fatty acid synthesis in adipocytes. The non-oxidative phase of the pathway leads to the conversion of ribulose 5-phosphate to fructose 6phosphate (F6P) and glyceraldehyde 3-phosphate (G3P). Then, these glycolytic metabolitesthrough gluconeogenesis reactions-form G6P, which can enter again into the PPP to produce more NADPH. MODE 4: In this scenario, the cellular need for NADPH and ATP is higher than that for R5P. As described in PPP mode 3, ribulose 5-P is transformed into G3P and F5P through the nonoxidative branch of the PPP; however, in mode 4, these molecules are metabolized to pyruvate through glycolysis, which is associated with ATP formation.
As previously mentioned, G6PD is the key enzyme in the regulation of the PPP. Therefore, any factor able to modify the level or activity of G6PD will determine the flow of the PPP. The "coarse control" of the PPP is carried out by modifications in the levels, location, and activity of G6PD [16]. Factors such as diet composition can induce changes in the synthesis of G6PD [17,18]. An excess of carbohydrates in the diet leads to lipogenesis and the deposition of fat, which is associated with a 5-10-fold increase in G6PD activity in the liver [16]. Accordingly, the expression of the G6PD gene is upregulated by the major transcription factor sterol-responsive element binding protein [19]. High insulin and low glucagon levels, which are associated with this kind of diet, have also been described as regulators of G6PD through the control of mRNA synthesis [20]. The transcription factor Nrf2, which regulates the antioxidant cellular response, also enhances G6PD gene expres- This mode dominates when the need for R5P is higher than that for NADPH, for instance, in proliferative cells. In this situation, the glycolytic metabolites 3GP and F6P can be converted in R5P through the reversible non-oxidative PPP. The oxidative PPP and its associated NADPH formation are bypassed. MODE 2: This mode occurs when the needs for NADPH and R5P are balanced. Then, ideally, from one molecule of G6P two molecules of NADPH and a molecule of R5P can be obtained with no generation of glycolytic metabolite. MODE 3: This mode is adopted when the cellular need for NADPH exceeds that for R5P and ATP, for instance, during fatty acid synthesis in adipocytes. The non-oxidative phase of the pathway leads to the conversion of ribulose 5-phosphate to fructose 6phosphate (F6P) and glyceraldehyde 3-phosphate (G3P). Then, these glycolytic metabolites-through gluconeogenesis reactions-form G6P, which can enter again into the PPP to produce more NADPH. MODE 4: In this scenario, the cellular need for NADPH and ATP is higher than that for R5P. As described in PPP mode 3, ribulose 5-P is transformed into G3P and F5P through the non-oxidative branch of the PPP; however, in mode 4, these molecules are metabolized to pyruvate through glycolysis, which is associated with ATP formation.
Apart from the NADPH/NADP + ratio, several other factors have been described as positive or negative regulators of G6PD; they are included in the last section of this manuscript.
Loss of Function Models for G6PD
G6PD deficiency is the most common human enzymopathy. It is very heterogeneous and was first described in humans by Marks and Gross in 1959 [26]. Approximately 400 million people worldwide carry a mutation in the G6PD gene, which causes an enzyme deficiency. Deficient alleles are prevalent in South and North America and in northern Europe [27]. However, the highest prevalence of this enzymopathy is reported in Africa, the Middle East, the central and southern Pacific Islands, southern Europe, and southeast Asia. The global distribution of the G6PD deficiency is strikingly similar to that of malaria. In areas where G6PD deficiency is common, Plasmodium Falciparum malaria is endemic, supporting the so-called malaria protection hypothesis [28]. Epidemiological evidence for the association between G6PD deficiency and a reduction in the risk of severe malaria [29] has been accompanied by the results of in vitro work showing that parasite growth is slowest in G6PD-deficient cells [28].
It has also been shown that G6PD-deficient red blood cells (RBCs) infected with parasites undergo macrophage-induced phagocytosis at an earlier stage of Plasmodium Falciparum maturation than normal RBCs. This could be a further protective mechanism against malaria [30]. The vulnerability of RBCs to mutant G6PD may reflect their lack of mitochondria and thus their inability to endogenously produce the substrates for malic enzyme and isocitrate dehydrogenase [8]. This may also reflect RBCs lack of nuclei and failure to replace the deficient G6PD protein as the cells age.
The G6PD gene is located at the telomeric region in the X chromosome. Thus, its deficiency is an X-linked hereditary defect that causes variants with different clinical phenotypes (about 140 mutations have been described). The G6PD-encoding gene has been well preserved throughout evolution [31]. As a monomer, the protein is inactive; however, as a dimer or tetramer, it is active. In its catalytic center, there is an amino acid sequence that binds to NADPH. The deficiency is caused by protein instability due to amino acid substitutions in different enzyme locations [28]. The diagnosis of G6PD deficiency is based on the spectrophotometric quantification of the enzyme's activity [32]. There are five categories of G6PD deficiency based on clinical manifestations and enzyme activity (Table 1) [28]. The most frequent clinical manifestations of G6PD deficiency are acute and chronic hemolytic anemia and neonatal jaundice [28]. The prevention of hemolysis by avoiding oxidative stress represents the most effective management of G6PD deficiency. Oxidative stress can be triggered by agents such as drugs (primaquine, sulfonamide, or acetanilide), infections (hepatitis viruses, cytomegalovirus, or pneumonia), or the ingestion of fava beans (favism). Favism is a hemolytic response to the consumption of fava beans that takes place in some individuals with G6PD deficiency [33]. Isouramil, divicine, and convicine are thought to be the toxic constituents of fava beans that lead to the onset of the clinical manifestations of deficiency [28]. The mechanism by which increased sensitivity to oxidative damage leads to hemolysis has not been fully elucidated [34].
G6PD is ubiquitously expressed in mammalian cells, with the highest expression observed in the immune cells, testes, adrenals, and brain [37]. It is often upregulated in tumors [9,38]. The enzyme is subject to tissue-specific transcriptional regulation, which in turn is correlated with the methylation of specific sites in the gene [37]. We recently found that immune cells, and especially T cells, are dependent on G6PD to maintain NADPH levels and effector functions [8]. Activated T cells do not express substantial levels of malic enzyme or isocitrate dehydrogenase and produce NADPH mainly through the PPP, which is sharply upregulated during T cell activation and is related to pro-inflammatory cytokine production [8]. Thus, severe G6PD mutations that affect the enzyme's catalytic ability can present as immune deficiency [39].
Favism has a higher incidence in males than females [28]. Males are hemizygous for the G6PD gene and thus can have normal gene expression or be G6PD deficient. Females, with two copies of the G6PD gene on each X chromosome, can have normal gene expression, be homozygous, or be heterozygous. Heterozygous females can achieve the same degree of G6PD deficiency and can be susceptible to the same pathophysiological phenotype present in G6PD-deficient males. However, heterozygous women on average have less severe clinical manifestations than G6PD-deficient males [28].
The World Health Organization Scientific Group has emphasized the need to develop animal research models for this frequent human hereditary disorder. Genetically, G6PD knockout mice are not viable [40]. Mouse viability is dependent on G6PD activity, as evidenced by a decrease in litter size corresponding to a decrease in G6PD activity [41].
In 1988, Merkle and coworkers created the first X-linked G6PD deficient animal model using 1-ethyl-l-nitrosourea-induced chemical mutagenesis [42]. Williams' research team reported a single point mutation (A to T transversion) at the 3 end of exon 1 that explained the decrease in G6PD activity in the G6PD-deficient mice [43].
Heterozygous, hemizygous, and homozygous mutants have~60%,~15%, and~15% of remaining precipitate activity in RBCs, respectively, when compared to wild type (WT) mice. Therefore, in comparison with the human classification of G6PD mutations, the mouse mutant falls into class III (mild mutation severity) with respect to its hematological and biochemical characteristics.
Using this model, it has been shown that mild G6PD deficiency (15% activity of WT) induces a pronounced decrease in RBC deformability and worsens erythrocyte dysfunction during sepsis. RBC dysfunction aggravates organ dysfunction and microcirculatory disturbances and may also contribute to the modulation of macrophage responses during severe infections in G6PD-deficient animals [43,44].
A significant number of studies have unveiled the roles of G6PD in various aspects of physiology other than erythrocytic pathophysiology, such as diabetes, cardiovascular disease, and neurodegeneration [45]. The association between G6PD deficiency and the development of diabetes has been supported by epidemiological studies conducted in different research groups and populations [46][47][48]. An increased risk for diabetes, and also of diabetic complications such as proliferative retinopathy [49], has been reported in G6PD-deficient subjects [50,51].
In preclinical studies, it has been shown that the liver and pancreas of diabetic rats show a reduction in G6PD activity [52]. Pancreatic islets from G6PD mutant mice are smaller than those of WT mice [53], which suggests that G6PD plays important roles in the survival and functions of pancreatic cells. Accordingly, it has been reported that mutations in the G6PD gene and the consequent drop in G6PD activity are sufficient to cause changes similar to those seen in diabetic mice [54]. Using the opposite methodological approach, we found that G6PD transgenic (Tg) mice moderately overexpressing the enzyme (2-4-fold overexpression) were more insulin sensitive and glucose tolerant than WT controls [9]. These results are in accordance with previous mouse overexpression models of NADPHdependent ROS-detoxifying enzymes. For instance, Prx3-Tg and Prx4-Tg mice were shown to have better insulin sensitivity and glucose tolerance compared to WT mice [55]. Although the molecular mechanism underlying the association between G6PD deficiency and diabetes is not completely understood, current evidence suggests that G6PD deficiency may be a risk factor for diabetes, with higher odds among men compared to women [46,47]. The role of ROS as physiological signals as well as pathological stresses has been demonstrated repeatedly in the cardiovascular system [56][57][58][59]. However, the relation between G6PD deficiency and risk for cardiovascular disease and subsequent outcomes is unclear. The existing data indicate a complex interplay in which the adverse effects of G6PD deficiency may outweigh the potential protective effects in the context of cardiac stress [34,[60][61][62].
The risk of redox-mediated damage to brain cells in G6PD deficiency has also been studied [63]. G6PD is an important enzyme in the protection against age-associated ROS neurodegenerative effects, and more specifically in the age-associated increase in oxidative DNA damage in the brain [63]. Recently, brain damage associated with ROS production in G6PD-deficient animals was also found to have functional consequences. Old G6PDdeficient male mice exhibited synaptic dysfunction in their hippocampal slices while young and old G6PD-deficient females exhibited deficits in executive functions and social dominance [64].
Taken together, these results suggest that there are broad health implications of G6PD deficiencies. Among the potential outcomes related to G6PD loss of function, birth defects, heart disease, diabetes, and neurodegeneration are highlighted.
G6PD and Cell Growth
The modulation of cell survival and cell growth relies on intracellular redox regulation [65]. As mentioned in the previous sections of this manuscript, NADPH-the principal intracellular reductant-is a critical modulator of redox potential. In 1999, Dr. Stanton and coworkers found that G6PD plays an important role in cell death by regulating intracellular redox levels [66]. The inhibition of G6PD by both dehydroepiandrosterone (DHEA) and 6-aminonicotinamide (6-ANAD) augmented cell death triggered by serum deprivation and oxidative stress, while the overexpression of G6PD in a cell line conferred resistance to H 2 O 2 -induced cell death. Previously, in G6PD-deficient cell lines, it was reported that these cells had decreased cloning efficiencies and growth rates and were highly sensitive to ROS when compared to cells expressing endogenous levels of the enzyme [67]. Consistent with these results, an association between the stimulation of cell growth in different tissues and increased PPP activity has also been reported [68]. Kidney hypertrophy due to unilateral nephrectomy is associated with increased G6PD activity [69], while the growth of rat liver cells stimulated by growth hormone is also associated with an increase in G6PD activity [70].
In experiments to determine if the increased G6PD activity per se is an essential component of normal cell growth, it was found that G6PD activity was directly correlated with cell growth, that the inhibition of G6PD activity prevented growth, and that the overexpression of G6PD alone stimulated [3H]-thymidine incorporation [65].
As previously mentioned, cancers and cultured tumor cells exhibit large increases in G6PD activity [71]. To test the potential tumorigenic risk of G6PD overexpression, we crossed G6PD-Tg mice with several genetically modified tumor-prone animals, including ATM-KO (that develop T-cell lymphomas), Eµ-myc (that develop B-cell lymphomas), p53-KO (that develop T-cell lymphomas and sarcomas), and MMTV-PyMT (that develop mammary tumors) [9,10]. In all of these combinations, mice carrying a G6PD-Tg allele showed the same tumor latency and incidence as the WT mice. These data indicate that a moderate and regulated increase in NADPH levels or G6PD expression and activity does not result in increased tumor incidence [9]. On the contrary, G6PD-Tg mice showed improved lifespan and health parameters as they grew old: (i) the transgenic animals were more insulin sensitive and glucose tolerant; (ii) old G6PD-Tg mice tended to gain less weight and exhibited improved motor coordination; and (iii) G6PD-Tg females showed a~14% increase in medium lifespan. At the molecular level, we found a reduction in age-associated lipid peroxidation and DNA oxidation in different tissues [9]. We related the decrease in age-associated oxidative damage to macromolecules, a result of the modulation of cellular NADPH levels, to the improvements in health and lifespan in the G6PD-Tg animals. The treatment of both animals and humans with antioxidant vitamins and other supplements [9], specially at high doses, has not been shown to increase lifespan and has failed to protect against age-induced pathologies. Studies on the biological roles of ROS have uncovered the beneficial signaling functions of these highly reactive molecules to explain these contradictory results [10]. The overexpression of antioxidant enzymes vs. the administration of exogenous antioxidants are very different approaches to test the importance of redox balance both in aging and age-associated diseases with very different outcomes [10,55].
G6PD in the Regeneration of Skeletal Muscle after Damage
The hexose monophosphate shunt is considered an almost negligible pathway in normal muscle. For this reason, the function of G6PD in skeletal muscle has been poorly investigated.
In vitro studies have shown that, under normal conditions, glucose breakdown takes place via both the Embden-Meyerhof pathway and the PPP in the liver, pancreas, arterial wall, kidney, spleen, and adrenals. However, in the central nervous system and cardiac and striated muscle, it is metabolized mainly via the glycolytic route [72]. In addition, several conditions increase the activity of the PPP in skeletal muscle: (i) embryogenesis [73]; (ii) denervation; (iii) ischemia; (iv) hypertrophy; (v) the injection of myonecrotic agents with local degeneration effects [74,75]; and (vi) physical exercise [32].
The injection of myonecrotic agents (bupivacaine, Marcaine, or cardiotoxin) induces a rapid (8 h) and dramatic (6-9-fold) increase in the activities of G6PD and 6PGD during regeneration after muscle destruction. By using histological techniques [76,77], it has been shown that G6PD is localized within muscle cells in regenerating muscle; thus, the enhanced enzyme activity resides in the muscle fibers themselves for at least the first 6-8 h after Marcaine injection. After that time, phagocytic cells contribute to the increase in enzyme activity [74]. The enhanced activities of G6PD and 6PGD likely reflect accelerated glucose utilization for the production of nucleic acids and lipids [75,[78][79][80]. In this regard, increased quantities of RNA have been noted in a number of studies on muscle regeneration [81][82][83]. The enhancement of the PPP is important for anabolic processes in the initial stages of skeletal muscle regeneration; however, the role of G6PD in skeletal muscle goes beyond biosynthetic processes. In 2016, Febbraio and coworkers found that one mechanism linking an altered cellular redox state to insulin resistance is NOS [84]. S G6PD activity in skeletal muscle is linked to nitric oxide (NO) bioavailability; thus, an impairment in the NOS isozyme (nNOSµ) in insulin resistant states in rodents and humans leads to an increase in G6PD activity [84].
The consequences of G6PD deficiency in skeletal muscle have been studied in clinical cases of rhabdomyolysis [85] and myopathies [86]. In fact, a statistically significant relationship has been found with regard to the activity of G6PD between RBCs and muscle in humans [87].
Positive Regulators of G6PD Activity in Skeletal Muscle-Role of Exercise
As previously mentioned, G6PD overexpression in Drosophila Melanogaster and mice protects against metabolic stress [9,88] and oxidative damage [9]. Very recently, we found that it also delays the onset of frailty by protecting against muscle damage [32].
As shown in Table 2, G6PD can be regulated by pharmacological, nutritional, and physiological interventions, such as physical exercise [68].
G6PD activity has been studied in both skeletal muscle and erythrocytes after one bout of exhaustive exercise. Surprisingly, contradictory results were found in the literature.
G6PD activity in erythrocytes is reduced in humans after one bout of high intensity exercise (~40%), likely due to ROS generation [89]. Accordingly, supplementation with L-cysteine for a week (0.5 g/24 h) [89] or with α-Tocopherol for a month (200 mg/24 h) [90] leads to the maintenance of the enzyme activity. These results have also been verified in long distance runners [91] and soccer players [91,92]. To the contrary, the highest relative increase in enzyme activities, both for mitochondrial and extramitochondrial enzymes, after exhaustive swimming in rat skeletal muscle was shown for G6PD and 6PGD, which increased by 115% and 40%, respectively, 1 and 3 days after an acute bout of exercise [93]. Similarly, an increase in muscle G6PD activity of~100-350% was observed after a downhill running protocol in rats [94], suggesting that the activation of the PPP occurs in skeletal muscle to provide substrates for muscle repair.
In one study, the changes in G6PD expression in skeletal muscle associated with different exercise intensities were investigated [95]. Based on the lactate threshold, it was shown that low-intensity aerobic treadmill running induced higher increases in the mRNA levels of G6PD in rat soleus muscle when compared to high-intensity anaerobic running [95]. Exercise duration is also a critical factor in the activation of G6PD in skeletal muscle. A significant linear correlation has been reported between the duration of downhill running (0, 30, or 90 min) and G6PD activity in different muscle groups in untrained rats [94].
G6PD activity also shows a susceptibility to exercise training in skeletal muscle. The exercise-induced elevation in muscle G6PD activity after one bout of downhill running was shown to be significantly reduced with only 5 days of either level or downhill training in rats [94]. This is the reason why changes in skeletal muscle G6PD activity have been widely used to study the "repeated bout effect", which refers to an adaptation whereby a single bout of eccentric exercise protects against muscle damage from subsequent eccentric bouts [94,96].
The activity of G6PD and 6PDG increases pronouncedly by a factor of three in the gastrocnemius muscle after 5 days of repeated ischemia [97]. A similar increase in the activities of PPP enzymes has also been found in the heart after myocardial infarction [98]. Again, these results suggest that the increase in G6PD activity is important for repair purposes, as it increases the production of NADPH and the pentoses necessary for biosynthetic processes.
The PPP has been proven to be a fundamental metabolic pathway that allows for rapid and robust hypertrophic growth in muscle cells in response to mechanical overload [99]. For example, the denervation of one half of the diaphragm was shown to induce transient hypertrophy in the muscle on the other side [100]. In this model, the activities of G6PD and 6PDG increased immediately after denervation, reaching a maximum after 3 days [100]. More recently, the importance of G6PD in the regulation of skeletal muscle metabolism during hypertrophy was highlighted in a study analyzing gene expression from a transcriptomic microarray of specific metabolic pathways in mechanically overloaded plantaris muscle-induced hypertrophy [99]. A robust increase in G6PD mRNA expression was found in the overloaded muscle throughout the whole analyzed time course (1, 3, 5, and 7 days), consistent with an increase in NADPH levels to support nucleotide biosynthesis and to boost the muscle antioxidant defense [99]. It was also shown that the abundance of the G6PD protein significantly increased (~140%) in response to 5 days of mechanical overload in muscle [101].
The "MyoMouse" is a conditional mice model that inducibly expresses an activated form of Akt1 specifically in skeletal muscle [102]. The induction of the Akt1 signaling pathway leads to selective hypertrophy in type II fibers and an increase in muscle strength [102,103]. A combination of metabolomic and transcriptomic analyses has shown that Akt1-induced muscle growth is accompanied by a robust upregulation of biosynthetic metabolic pathways, such as the PPP, and the downregulation of catabolic pathways, such as glycolysis and oxidative phosphorylation [103]. Specifically, the "MyoMouse" shows a 3.5-fold increase in G6PD and a 2.3-fold increase in 6PDG in the hypertrophied muscles. Consistent with an increase in metabolite flux through the PPP, a 1.8-fold accumulation of R5P, an increase in total RNA, and an increase in purines and pyrimidine metabolites, including 5-aminoimidazole-4-carboxamide ribonucleotide (AICAR) and xanthosine, were also reported in the muscle tissue [103].
A potential limitation of the studies discussed above is the fact that all the analyses encompassed the whole muscle and could not distinguish the contribution of non-muscle cell types to the observed changes in G6PD expression, protein levels, or activity.
It has been suggested that the accumulation of cells in the connective tissue rather than changes in the activity within muscle fibers may explain the increase in the activity of the PPP enzymes in skeletal muscle following injury [104]. Macrophage, neutrophil, and mast cell levels are elevated after exercise and in mechanically overloaded muscles [104,105], which may influence the reported activity of G6PD.
The development of in vitro studies in C2C12 myoblasts has helped to overcome this concern. The overexpression of G6PD in C2C12 G6PD cells promotes their proliferation and significantly increases the percentage of EdU-positive cells. To the contrary, G6PD inhibition in myoblasts induces cell cycle arrest in the G0/G1 phase and suppresses muscle cell proliferation [106]. Moreover, the high-frequency electrical stimulation of C2C12 myotubes-mimicking muscle contraction-increases the expression of genes encoding the enzymes of the PPP [107].
The results reported by McCarthy's research team also provide evidence that most of the observed changes in gene expression reflected in skeletal muscle occur within muscle fibers themselves [108]. These authors showed that myofibers are overwhelmingly the most transcriptionally active cell type in skeletal muscle at rest and during muscle hypertrophy. Approximately 90% of nascent RNA is associated with myonuclei during a mechanical overload induced by synergist ablation.
The results reported by Shimokawa and coworkers also support the idea that the exercise-induced increase G6PD activity is muscle specific and independent of inflammatory cells [95]. They found an increase in the mRNA expression of G6PD with aerobic exercise in rat soleus muscle, while this increment was absent in the animals following an anaerobic protocol. Macrophage invasion and injured and regenerating fibers were observed after anaerobic exercise, while neither of these signs of damage were found after the aerobic protocol [95] Finally, testosterone [109] and growth hormone-induced muscle fiber hypertrophy in aging [110], two treatments that are independent of inflammatory signals, have been associated with an increase in G6PD protein levels in skeletal muscle.
These data suggest that muscle adaptations to exercise training or to mechanical overload require enhanced redox metabolism via the production of NADPH through the PPP and an increase in the expression of G6PD [99,111].
Physical exercise acutely increases ROS generation; however, if practiced regularly, it induces positive adaptations in mitochondrial density [112][113][114][115] and antioxidant defenses, including increased G6PD enzymatic activity [115][116][117]. Growing evidence suggests that physical training upregulates the level of antioxidant enzymes in the tissues actively involved in exercise [118,119]. For instance, eccentric exercise training in mice lasting 5 days was shown to be enough to induce an increase in G6PD mRNA levels and activity in skeletal muscle in young animals [32], similar to that found in a transgenic mouse model moderately overexpressing G6PD [9].
The age-associated loss in muscle mass and strength (i.e., sarcopenia) leads to a decrease in G6PD activity and protein content in skeletal muscle [120]. Whether the wellknown positive effects of exercise training in old individuals are mediated through an increase in G6PD activity should be further studied in depth. The results published to date are contradictory and do not allow definitive conclusions to be drawn [121][122][123][124].
Finally, the question of whether exercise training is a safe and useful intervention in G6PD-deficient patients is something that has been an object of debate. G6PD-deficient individuals, as previously mentioned, are less protected against oxidative stress and could be predisposed to oxidative damage when they perform high-intensity physical training [125]. However, several studies have shown that exercise intensity does not cause oxidative stress or hemolysis above those levels expected in people without G6PD deficiency [126][127][128][129]. Therefore, despite the limited published studies, it seems that G6PD-deficient patients can safely participate in physical exercise programs with different intensities and durations. | 8,216.6 | 2022-09-28T00:00:00.000 | [
"Medicine",
"Biology",
"Environmental Science"
] |
Control sets for bilinear and affine systems
For homogeneous bilinear control systems, the control sets are characterized using a Lie algebra rank condition for the induced systems on projective space. This is based on a classical Diophantine approximation result. For affine control systems, the control sets around the equilibria for constant controls are characterized with particular attention to the question when the control sets are unbounded.
Introduction
We will study controllability properties of affine control systems of the forṁ Controllability properties of bilinear and affine control systems have been intensely studied in the last 50 years. The classical monograph by Mohler [20] contains sufficient conditions for complete controllability and many applications of bilinear control B Fritz Colonius<EMAIL_ADDRESS>1 Institut für Mathematik, Universität Augsburg, Augsburg, Germany 2 Departamento de Matemática, Universidade Estadual de Maringá, Maringá, Brazil systems. The monograph Elliott [13] emphasizes the use of matrix Lie groups and Lie semigroups and contains a wealth of results on the control of bilinear control systems.
Motivated by the Kalman criterion for controllability of linear systems, an early goal was show that controllability of bilinear control systems (without control restrictions) has an algebraic characterization. This hope did not bear out, in spite of many partial results. The present paper is mainly concerned with the analysis of control sets, that is, maximal subsets of complete approximate controllability in R n , cf. Definition 2.1 and Colonius and Kliemann [10] for a general theory.
The main result of Do Rocio et al. [12,Theorem 1.3] concerns a connected semigroup S with nonvoid interior in an affine group G = B V , where V is a finite dimensional vector space and B is a semisimple Lie group that acts transitively on V \ {0}. If the linear action of the canonical projection π(S) on B is transitive on V \ {0}, then the affine action of S on V is transitive. This improves an earlier result in [17]. An application to an affine control system of the forṁ where A, B ∈ sl(2, R) and a, b ∈ R 2 , results in a sufficient controllability criterion in terms of these parameters. Answering a question by Sachkov [22], Do Rocio et al. [11] prove that systems of the form (1.2) with a = b = 0 and unrestricted control may not be completely controllable on R n \ {0} while there is no nontrivial proper closed convex cone in R n which is positively invariant. For the relation to the results in the present paper see Remark 3.17 and also Proposition 5.14.
Our results on control sets will also yield some results on controllability on R n . We do not restrict our attention to the situation where the system semigroup has nonvoid interior in the system group. Correspondingly, our main results are not based on methods for semigroups in Lie groups.
In the first part of this paper we discuss control sets for homogeneous bilinear systems which are a special case of (1.1) with c 1 = · · · = c m = d = 0. It is well known that, for this class of systems, one can separate controllability properties into properties concerning the angular part on the unit sphere S n−1 and the radial part. In particular, by Bacciotti and Vivalda [2, Theorem 1] the induced system on projective space P n−1 is controllable if and only if the induced system on S n−1 is controllable.
Theorem 3.2 shows that every control set S D with nonvoid interior on S n−1 induces a control set D on R n \ {0} given by the cone generated by S D provided that exponential growth and decay can be achieved. Here we use a classical result on Diophantine approximations which allows us to require only the accessibility rank condition on S n−1 in the interior of S D. This result is illustrated by two-dimensional examples. For systems satisfying the accessibility rank condition on projective space, the control sets on the unit sphere and on R n \ {0} are characterized in Theorem 3.12 and Theorem 3.15, respectively. We remark that under the accessibility rank condition in R 2 , a complete description of the control sets and of controllability is given in Ayala et al. [1]. Corollary 3.21 characterizes controllability on R n \ {0} for systems satisfying only the accessibility rank condition on P n−1 using a recent result by Cannarsa and Sigalotti [7,Theorem 1] which shows that here approximate controllability implies controllability.
In the second part we analyze control sets for general affine systems and their relation to equilibria. If the systems linearized about equilibria are controllable, Theorem 5.6 shows that any pathwise connected set of equilibria is contained in a control set. Additional assumptions on spectral properties of the matrices A(u) = A + m i=1 u i B i , u ∈ , allow us to get more detailed information. In particular, if 0 is an eigenvalue of A(u 0 ) for some u 0 ∈ , one finds an unbounded control set, cf. Theorem 5. 13. The main open problem for control sets of affine systems is, if every control set contains an equilibrium.
The contents of this paper are as follows. Section 2 describes basic properties of nonlinear control systems and control sets as well as some notation for bilinear and affine control systems. Section 3 discusses homogeneous bilinear control systems using their projection to the unit sphere. Section 4 briefly describes equilibria of affine systems and Sect. 5 presents results on control sets around such equilibria.
Preliminaries
In this section we introduce some terminology and notations for control-affine systems and discuss special cases of affine control systems.
Control sets
Control-affine systems on a smooth manifold M have the forṁ where f 0 , f 1 , . . . , f m are smooth vector fields on M and the control range ⊂ R m is compact with 0 ∈ int ( ). We assume that for every initial state x ∈ M and every control function u ∈ U there exists a unique solution ϕ(t, x, u), t ∈ R, satisfying ϕ(0, x, u) = x of (2.1) depending continuously on x. The system with u ≡ 0 given byẋ is called the uncontrolled system. It generates a continuous flow ϕ t on M. For the general theory of nonlinear control systems we refer to Sontag [24] and Jurdjevic [18].
The set of points reachable from x ∈ M and controllable to x ∈ M up to time T > 0 are defined by resp. Furthermore, the reachable set (or "positive orbit") from x and the set controllable to x (or "negative orbit" of x) are have nonvoid interior for all T > 0 and the system is called locally accessible if this holds in every point x ∈ M. This is guaranteed by the following accessibility rank condition is the subspace of the tangent space T x M corresponding to the vector fields, evaluated in x, in the Lie algebra generated by f 0 , f 1 , . . . , f m . The trajectories for the convex hull of can be uniformly approximated on bounded intervals by the trajectories for . Furthermore, trajectories for controls in U can be uniformly approximated on bounded intervals by trajectories for piecewise constant controls in U pc .
The following definition introduces subsets of complete approximate controllability which are of primary interest in the present paper. Definition 2.1 A nonvoid set D ⊂ M is called a control set of system (2.1) if it has the following properties: (i) for all x ∈ D there is a control function u ∈ U such that ϕ(t, x, u) ∈ D for all t ≥ 0, (ii) for all x ∈ D one has D ⊂ O + (x), and (iii) D is maximal with these properties, that is, if D ⊃ D satisfies conditions (i) and (ii), then D = D.
A control set D ⊂ M is called an invariant control set if D = O + (x) for all x ∈ D. All other control sets are called variant.
If the intersection of two control sets is nonvoid, the maximality property (ii) implies that they coincide. If the system is locally accessible in all . The control sets with nonvoid interior for piecewise constant controls in U pc coincide with those for controls in U. For these and further properties of control sets, we refer to Colonius and Kliemann [10,Chapters 3 and 4].
The following lemma shows that the controllable set of system (1.1) coincides with the reachable set of the time reversed system. Lemma 2.2 Consider together with system (2.1) the time reversed systeṁ .
The other inclusions follow analogously.
Affine and bilinear control systems
Frequently, we abbreviate hence the columns of C are given by the c i . Then (1.1) can be written aṡ A special case are bilinear control systems obtained for d = 0, i.e.
and homogeneous bilinear systems of the forṁ For fixed control u ∈ U (1.1) is a nonautonomous inhomogeneous linear differential equation. Denote by u (t, s) ∈ R n×n the principal matrix solution, i.e., the solution of The solutions ϕ(t, x 0 , u), t ∈ R, of (1.1) with initial condition ϕ(0, x 0 , u) = x 0 ∈ R n are given by and, in particular, the solutions of (2.7) are This readily implies for α ∈ R
Control sets for homogeneous bilinear systems
We consider homogeneous bilinear control systems of the form (2.7 ) and describe their control sets. Since for fixed control u, the corresponding differential equations are homogeneous, their controllability properties can often be split into controllability properties for the angles and the radii separately; cf., e.g., Colonius and Kliemann [10,Chapter 7]. Denote the projection of R n to the Euclidean unit sphere S n−1 by π and the projection to real projective space P n−1 (obtained by identifying opposite points on the sphere) by P. For a trajectory of (2.7) define The projected trajectories are trajectories of control-affine systems on S n−1 given bẏ The vector fields of the system on S n−1 are obtained by subtracting the radial component. The solutions will be denoted by s(t, s 0 , u), t ∈ R. One also obtains an induced control system on projective space P n−1 with vector fields Ph(u, ·) since h i (s) = −h i (−s) for all i. Since bilinear control systems as well as their projections to S n−1 and P n−1 are analytic, for these systems, local accessibility is equivalent to the corresponding accessibility rank condition (2.3); cf. Sontag [24, Theorem 12 on p. 179].
We note the following simple result showing a first relation between control sets on R n and control sets on S n−1 .
Proposition 3.1
Suppose that D ⊂ R n \ {0} is a control set of system (2.7). Then the projection P(D) to projective space P n−1 is contained in a control set P D for the induced system on P n−1 , and the projection π(D) to the unit sphere S n−1 is contained in a control set S D for the induced system (3.1) on S n−1 . If D has nonvoid interior, then also P D and S D have nonvoid interiors.
Proof The assertions immediately follow from the definitions and the fact that the projections π and P are open.
Next we will analyze when a control set on the unit sphere S n−1 generates a control set on R n . This result is based on a Diophantine approximation result used for Lemma 3.4. Theorem 3.2 Let S D be a control set with nonvoid interior for the system on the unit sphere S n−1 and suppose that Then the cone {αs ∈ R n |α > 0, s ∈ S D } is a control set in R n with nonvoid interior.
Proof First observe that (3.2) implies for the projected system on S n−1 Hence we get periodic solutions in int ( S D) ⊂ S n−1. .
Step 1: Let s 0 ∈ int ( S D). Then for every x 0 ∈ l := {αs 0 ∈ R n |α > 0 } the closure of the reachable set from x 0 contains the half-line l.
For the proof of this claim, consider arbitrary points x 0 = α 0 s 0 , x 1 = α 1 s 0 ∈ l with α 0 , α 1 > 0. The strategy is to steer the system from s 0 to s + , then to go k times through the periodic trajectory for u + , then to steer the system to s − , go times through the periodic trajectory for u − , and finally steer the system back to s 0 . The numbers k, ∈ N will be adjusted such that the corresponding trajectories in R n starting in x 0 approach x 1 . By local accessibility in int ( S D) there are times τ 1 , τ 2 , τ 3 > 0 and controls v 1 One finds for the system in R n numbers β 1 , β 2 , β 3 > 0 with The corresponding trajectory on S n−1 is periodic and satisfies s(τ 1 + iσ + , s 0 , w k, ) = s + for i = 0, 1, . . . , k, s(τ 1 + kσ + + τ 2 + iσ − , s 0 , w k, ) = s − for i = 0, 1, . . . , , and for the corresponding trajectory on R n one finds using (2.8) Recall that our goal is to reach x 1 = α 1 s 0 approximately. We apply Lemma 3.4 with a = α + , b = α − −1 , and c = α 1 (β 3 β 2 β 1 ) −1 , where we choose α + ∈ (α + 0 , α + 0 +δ 0 ) such that log b log a = − log α − log α + is irrational. Thus for every ε > 0 there are k, ∈ N with hence for all ε > 0 there are k, ∈ N with It follows that for some δ ∈ (−ε, ε) one can choose k, such that Since ε > 0 is arbitrary, it follows that x 1 = α 1 s 0 is in the closure of the reachable set of x 0 and hence l is contained the closure of the reachable set from x 0 .
Step 2: Let x 1 , x 2 ∈ {αs ∈ R n |α > 0, s ∈ S D }, hence there are α 1 , α 2 > 0 and s 1 , s 2 ∈ S D with x 1 = α 1 s 1 and x 2 = α 2 s 2 . Then there are a control u 1 and a time t 1 ≥ 0 with s(t 1 , s 1 , u 1 ) = s 0 , hence ϕ(t 1 , x 1 , u 1 ) = γ 1 s 0 ∈ l for some γ 1 > 0. Since s 0 , s 2 ∈ S D one finds, for ε > 0, a control u 2 and a time t 2 ≥ 0 such that, for The trajectory in R n satisfies ϕ(t 2 , s 0 , u 2 ) = γ 2 s 3 for some γ 2 > 0. By (2.8) it follows that Step 1 implies that one finds arbitrarily close to α 2 γ 2 s 0 ∈ l points in the reachable set from γ 1 s 0 , hence in the reachable set from x 1 . By continuous dependence on the initial value, it follows that under the control u 2 points in the reachable set from x 1 are steered into the ε-neighborhood of x 2 . Since ε > 0 is arbitrary, this shows that x 2 is in the closure of the reachable set from x 1 .
Step 3: We have shown that the cone D := {αs ∈ R n |α > 0, s ∈ S D } is a set of complete approximate controllability. It is maximal with this property, since any set of approximate controllability in R n projects to a set of approximate controllability in S n−1 , and S D is a maximal set of approximate controllability. Finally, for every point Hence the cone D is a control set and it has a nonvoid interior.
Step 1 in the proof above is based on the following lemma which uses a Diophantine approximation property.
Lemma 3.4 Let a, b, c be real numbers with a
Proof Since the logarithm is continuously invertible, it suffices to show that for every We use the following Diophantine approximation result which is due to Tchebychef [25, Théorème, p. 679]: For any irrational number α and any β ∈ R the inequality x |y − αx − β| < 2 has an infinite number of solutions in x ∈ N, y ∈ Z. Observe that here also y ∈ N if α > 0, since then sgn(y) = sgn(αx) = sgn(x) = 1. For an application to the problem above, has an infinite number of solutions k, ∈ N. Choosing large enough such that 2 log a < ε and dividing by one gets, as desired,
Remark 3.5
The Diophantine approximation result used above is closely related to a theorem due to Minkowski on inhomogeneous linear Diophantine approximation, cf. Cassels, [8, Theorem I in Chapter III]. Here the existence of integers x, y solving x |y − αx − β| < 1 4 is established, but not the existence of infinitely many pairs x, y with this property, as required for the proof above.
The following two examples illustrate Theorem 3.2. We consider problems in R 2 where the induced system on the unit circle is not locally accessible. First let A be given in Jordan normal form A = λ 1 0 0 λ 2 and let the matrices B 1 and B 2 be diagonal.
The situation is a bit more complicated than in Remark 3.6, since the intersections of the relevant eigenspaces with the unit sphere yield boundary points of the control set S D.
Example 3.7 Consider a system of the form This can be written as In order to verify condition (3.2), assume that there is ( Let τ 1 > 0, and define τ 2 := τ 1 Fix a point s + ∈ S D i . Then it follows that Since τ 1 > 0 is arbitrary, the first equality in (3.2) holds with σ + = τ 2 + τ 1 and Define, with τ 1 > 0 and τ 3 := τ 1 Then it follows that Thus also the second equality in (3.2) holds with σ − = τ 3 +τ 1 and α − = e τ 1 μ 1 (u 2 ,v 2 ) < 1. Now Theorem 3.2 implies that there are four control set in R 2 given by the interiors of the four quadrants.
Observe that conditions (3.4), (3.5), and (3.6) are satisfied in the simple example and one may choose The next example shows that the situation is quite different if A is a two-dimensional Jordan block; in particular, scalar controls suffice to verify assumption (3.2) in Theorem 3.2 for a control set S D = S 1 .
Example 3.8 Consider
with λ ∈ R and u(t) ∈ . The system can be written as For all u ∈ the eigenvalue μ(u) = λ + b 11 u has the eigenspace R × {0}. The intersection of the unit circle with the eigenspace is given by {(1, 0) , (−1, 0) }, which are fixed under any control for the projected system. Suppose that b 12 = 0 and contains the two points u 1 := 0 and u 2 := −2/b 12 , and write μ 1 = μ(u 1 ) = λ and Thus we consider the two differential equations The solutions of (3.8) are given by It follows that Similarly, we can find conditions for α − < 1: The control sets on the unit sphere do not change if we add a third control value u 3 which will be specified in a moment. Repeating the derivation above, we find with This is equivalent to We conclude that condition and u 3 satisfying (3.9). Then there are two invariant control sets with nonvoid interior in R 2 given by the open upper and lower half-planes. Observe that these conditions hold, e.g., for Next we impose stronger assumptions on the homogeneous bilinear control system (2.7). We require that the control range is a compact and convex neighborhood of the origin and that the accessibility rank condition holds on all of P n−1 , dim LA{Ph(u, ·); u ∈ }( p) = n − 1 for all p ∈ P n−1 . (3.10) Then by Colonius and Kliemann [10, Theorem 7.1.1] there are k 0 control sets with nonvoid interior in P n−1 denoted by P D 1 , . . . , P D k 0 , 1 ≤ k 0 ≤ n. Exactly one of these control sets is an invariant control set.
Remark 3.9
Braga Barros and San Martin [6] use the classification of semisimple Lie groups acting transitively on projective space P n−1 (cf. Boothby and Wilson [5]) to determine the number k 0 ∈ {1, . . . , n} of control sets P D i in projective space (it is either equal to n, n/2, or n/4).
Next we analyze the relations between the control sets for the induced systems on projective space P n−1 and on the unit sphere S n−1 . We will frequently use the following elementary facts that follow from (2.8): Let s 1 , s 2 ∈ S n−1 . If s 2 can be reached from s 1 (for system (3.1)), then −s 2 can be reached from −s 1 . If on P n−1 the point Ps 2 can be reached from Ps 1 , then on S n−1 at least one of the points s 2 or −s 2 can be reached from s 1 .
The proof of the following lemma is modeled after Bacciotti and Vivalda [2, Lemma 3], where controllable systems are analyzed. Lemma 3.10 (i) Let S D be a control set on S n−1 . Then the projection of S D to P n−1 is contained in a control set P D. (ii) Assume that the accessibility rank condition (3.10) on P n−1 holds and consider a control set P D i on P n−1 . Suppose that there is s 0 ∈ S n−1 such that Ps 0 ∈ int ( P D i ) and −s 0 can be reached from s 0 . Then there exists a control set S D on S n−1 containing A := {s ∈ S n−1 |Ps ∈ P D i }.
Proof Assertion (i) is immediate from the definitions. Concerning assertion (ii) it is clear that for all s ∈ A there is a control u such that the trajectory of system (3.1) remains in A for all t ≥ 0. Now let s 1 , s 2 ∈ A. We have to show that s 2 is in the closure of the reachable set O + (s 1 ) for system (3.1). Since Ps 1 , Ps 2 ∈ P D i it follows that s 2 ∈ O + (s 1 ) or −s 2 ∈ O + (s 1 ). In the first case we are done. In the second case it follows that s 2 ∈ O + (−s 1 ), and that, by our assumption, s 0 ∈ O + (−s 0 ). As noted in Sect. 2, P(−s 1 ) = Ps 1 ∈ P D i and Ps 0 ∈ int ( P D i ) imply that Ps 0 can be reached from P(−s 1 ), hence s 0 ∈ O + (−s 1 ) or −s 0 ∈ O + (−s 1 ). We claim that also in the second case one can reach s 0 from −s 1 and in the first case, one has The proof of the next proposition uses arguments from Bacciotti and Vivalda [2, Proposition 2].
Proposition 3.11
If accessibility rank condition (3.10) holds for the induced system on P n−1 , it also holds for the induced system on S n−1 .
Proof Recall that P n−1 = (R n \ {0})/ ∼, where ∼ is the equivalence relation x ∼ y if y = λx with some λ = 0. Furthermore, an atlas of P n−1 is given by n charts (U i , ψ i ), where U i is the set of equivalence classes [x 1 : · · · : x n ] with x i = 0 (the homogeneous coordinates) and ψ i : U i → R n−1 is defined by where the hat means that the i-th entry is missing.
For the sake of simplicity we prove the rank condition for the North Pole of S n−1 given byz 0 = (0, . . . , 0, 1). By assumption, the rank of the Lie algebra of the system on P n−1 is n − 1 on all of P n−1 . Consider the point x 0 = [0 : · · · : 0 : 1] ∈ P n−1 . Thus there exist n − 1 matrices A 1 , . . . , A n−1 in the Lie algebra generated by the system on R n \ {0} such that for the induced vector fields A 1 , . . . , A n−1 in the Lie algebra for the system on P n−1 one obtains that the rank of the family (5)] shows the following formula for the local expression of this family, which has the form A n 1 (z 0 ), . . . , A n n−1 (z 0 ) with z 0 = (0, . . . , 0); let a k 1 (z 0 ), . . . , a k n (z 0 ) denote the n components of A kz0 . Then, for k = 1, . . . , n − 1, So A n k (z 0 ) is the vector whose components are equal to the first n − 1 components of the last column of the matrix A k .
On the other hand, the projections on S n−1 of the linear vector fields for the matrices A 1 , . . . , A n−1 are the vector fields (cf. (3.1)) Thus we get, for k = 1, . . . , n − 1 We get the following result characterizing the relation between the control sets P D 1 , . . . , P D k 0 , 1 ≤ k 0 ≤ n, on projective space and the control sets on the unit sphere.
The set A + is contained in a control set S D and the set A − is contained in a control set It follows that {s ∈ S n−1 |Ps ∈ P D i } ⊂ S D ∪ S D , since P is an open map and P D i ⊂ int ( P D i ). By Lemma 3.10(i) the projections of S D and S D to P n−1 are contained in P D i , hence (3.11) follows. The same arguments with −s 0 instead of s 0 implies that S D = − S D . If S D or S D is an invariant control set, then also P D i is an invariant control set, hence there are at most two invariant control set on S n−1 .
(iii) This is a consequence of assertion (ii).
Recall the following definitions from Colonius and Kliemann [10]. For a solution ϕ(t, x, u), t ≥ 0, of (2.7) the Lyapunov exponent is Observe that the Lyapunov exponents are constant on lines through the origin.
Definition 3.13
For a control set P D in P n−1 the Floquet spectrum is given by Px ∈ int ( P D) and u is piecewise constant τ -periodic for some τ ≥ 0 with Pϕ(τ, x, u) = Px , and for a control set S D in S n−1 the Floquet spectrum is given by x ∈ int ( S D) and u is piecewise constant τ -periodic for some τ ≥ 0 with s(τ, x, u) = x .
Proposition 3.14 If S D is a control set with nonvoid interior on S n−1 that projects to a control set P D in P n−1 , then Proof The inclusion " Fl ( S D) ⊂ Fl ( P D)" is clear. For the converse, consider Px ∈ int ( P D) and a piecewise constant τ -periodic control u with Pϕ(τ, x, u) = Px.
We may suppose that x ∈ S n−1 , hence x ∈ S D or −x ∈ S D. Consider the first case. If Analogously one argues in the case −x ∈ S D.
The following result describes the control sets in R n under the accessibility rank condition on projective space.
Theorem 3.15
Assume that the homogeneous bilinear control system (2.7) satisfies the accessibility rank condition (3.10) on P n−1 . If a control set S D i , i ∈ {1, . . . , k 1 }, on S n−1 satisfies 0 ∈ int ( Fl ( S D i )), then the cone
generated by S D i is a control set with nonvoid interior in R n \ {0}. At most two of the D i are invariant control sets.
Proof By Proposition 3.11, every point in S D i is locally accessible. Hence the first assertion follows from Theorem 3.2, if we can show that assumption (ii) in that theorem holds. The Floquet spectrum over a control set in projective space is a bounded interval, cf. [10, Proposition 6.2.14]. By Proposition 3.14 the same holds true for the Floquet spectrum of Fl ( S D i ). If 0 ∈ int ( Fl ( S D i )), it follows that there are points s + , s − ∈ int ( S D i ), controls u + , u − ∈ U and times σ + , σ − > 0 such that where α + := exp(σ + λ(u + , s + )) ∈ (1, ∞) and α − := exp(σ − λ(u − , s − )) ∈ (0, 1). This verifies assumption (ii) of Theorem 3.2 if we take into account that we may vary σ + and hence α + . Furthermore, every invariant control set D projects to an invariant control set on S n−1 , and here there are at most two invariant control sets. Not all control sets on the unit sphere generate cones that are control sets in R n \{0} as indicated by the following proposition, Proposition 3.18 Assume that the homogeneous bilinear control system (2.7) satisfies the accessibility rank condition (3.10) on S n−1 and let S D be a control set in S n−1 with nonvoid interior. Then the following assertion holds.
If the supremum of {λ(u, x) |s(t, x, u) ∈ S D for all t ≥ 0 } is less than 0 or the infimum is greater than 0, then the cone is not a control set.
Proof Exact controllability to points in the interior of S D implies that for all x, y ∈ R n \ {0} with x x ∈ S D and y y ∈ int( S D) there are α, T > 0 and u ∈ U with ϕ(T , x, u) = α y. Now consider (x, u) with s(t, x, u) ∈ S D for all t ≥ 0. Then in the first case the trajectory in R n satisfies ϕ(t, x, u → 0 and in the second case it satisfies ϕ(t, x, u → ∞. Hence the assertion follows. Remark 3. 19 We refer to Colonius and Kliemann [10] for a discussion when the supremum and the infimum of {λ(u, x) |s(t, x, u) ∈ S D for all t ≥ 0 } coincide with the supremum and the infimum of Fl ( S D), respectively. For dimension n = 2, [10, Theorem 10.1.1] shows that these equalities hold if the accessibility rank condition holds in P 1 . For general n ∈ N suppose that the control range is given by ρ · , ρ ≥ 0, and the following "ρ-inner-pair condition" for the system on S n−1 holds: Then [10,Theorem 7.3.26] implies that for all ρ ∈ (0, ∞) except for at most n − 1 ρ-values the systems with control range ρ · have the property that the equalities for the suprema and the infima hold for all control sets.
Example 3.20 Consider the damped linear oscillator
where ρ ∈ 1, 5 4 . Hence the system equation is given by The eigenvalues of A(u) satisfy det(λI − A(u)) = det λ −1 1 + u λ + 3 = λ 2 + 3λ + 1 + u = 0, and one obtains two real eigenvalues with corresponding eigenvectors (x, λ 1 (u)x) and (x, λ 2 (u)x) , x = 0. Note that λ 2 (u) > 0 if and only if u ∈ [−ρ, −1). Since for all u ∈ [−ρ, ρ] one has λ 1 (u) < λ 2 (u) the projected trajectories in P 1 go from the eigenspace for λ 1 (u) to the eigenspace for λ 2 (u). A short computation shows that there is an open control set P D 1 and a closed invariant control set P D 2 in projective space P 1 given by the projections of resp., where by [10, Theorem 10.1.1] the Floquet spectra are The control sets in P 1 induce four control sets on the unit circle S 1 . For P D 2 one obtains the two control sets S D 2 = − S D 2 . Since u = −1 ∈ (−ρ, ρ) and 0 = λ 2 (−1) ∈ int ( Fl ( P D 2 )), Theorem 3.15 implies that there are two invariant control sets in R 2 \ {0}, they are the cones Next we present a necessary and sufficient condition for controllability on R n \ {0}. The infimal and supremal Lyapunov exponents, cf. (3.12), are resp. The following result improves Colonius and Kliemann [10, Corollary 12.2.6(iii)], where the accessibility rank condition is assumed in R n \ {0}.
Corollary 3.21
Assume that the homogeneous bilinear control system (2.7) satisfies the accessibility rank condition (3.10) on P n−1 . Then it is controllable in R n \ {0} if and only if the induced system on P n−1 is controllable and κ * < 0 < κ.
Conversely, controllability on P n−1 implies by Bacciotti and Vivalda [2, Theorem 1] that S D = S n−1 is a control set. By Theorem 3.15, it follows that R n \{0} is a control set. This implies that for every initial point x = 0 the reachable set O + (x) is dense in R n \ {0}, i.e., approximate controllability holds. For homogeneous bilinear control systems, Cannarsa and Sigalotti [7,Theorem 1] shows that approximate controllability implies controllability in R n \ {0}. This completes the proof.
Remark 3.23
For control systems on semisimple Lie groups, San Martin [23, Proposition 5.6] shows the following result. Let G ⊂ Sl(n, R) be a semisimple, connected, and noncompact group acting transitively on R n \ {0} and let S be a semigroup with nonvoid interior in G. Then S is controllable on R n \ {0} if and only if S is controllable in P n−1 . In this case 0 ∈ (κ * , κ) = int Fl (P n−1 ) .
Equilibria of affine systems
In the rest of this paper we discuss control sets for affine systems of the form (1.1). We begin by analyzing the equilibria.
For each control value u ∈ , an associated equilibrium point of system (1.1) is a state x u that satisfies If for u ∈ there is a solution x u of (4.1) and det A(u) = 0, then every point in the nontrivial affine subspace x u + ker A(u) is an equilibrium. If there is u ∈ with Cu + d = 0, then equation (4.1) always has the solution x u = 0. If det A(u) = 0, then there exists a unique equilibrium of (1.1) given by (4.2) The following simple but useful result shows that for constant control u the phase portrait of the inhomogeneous equation is obtained by shifting the origin to x u .
Proposition 4.1 Consider for constant control u ∈ a solution ϕ(t, x, u), t ≥ 0, of the inhomogeneous equation (1.1) and let x u be an associated equilibrium. Then ϕ(t, x, u)− x u is a solution of the homogeneous equationẋ(t) = A(u)x(t) with initial
The following proposition shows that the affine control system (1.1) is equivalent to an inhomogeneous bilinear system, if there is u 0 ∈ with Cu 0 + d = 0.
Proposition 4.2
Suppose that there is u 0 ∈ with Cu 0 + d = 0 and consideṙ with trajectories denoted by ψ (·, x, v). Then the trajectories ϕ(·, x, u), u ∈ U, of (1.1) Proof One computes for a solution x(t) = ϕ(t, x, u), t ∈ R, of (1.1) We introduce the following notation for the set of equilibria, Consider a sequence u k → v i for some i. If x u k remains bounded, we may assume that it converges to some y ∈ R n . For k → ∞ we find contradicting the assumption of the theorem. It follows that x u k becomes unbounded for k → ∞.
(ii) If = [u * , u * ], u * < u * , the equilibrium set E = {x u u ∈ \ {v 1 , . . . , v r } } consists of at most n + 1 smooth curves having no finite endpoints, with the possible exception of the equilibria corresponding to the minimum and maximum values of u in , i.e., u = u * , u * . If there is more than one curve constituting E, then the finite end points which are the equilibria x u * and x u * must lie on different curves. Hence the assertion also follows in this case. Similarly, the assertion follows for intervals which are unbounded to one side.
The following example is used in Rink and Mohler [21, Example 2] and Mohler [20, Example 2 on page 32] as an example for a system that is not controllable. It illustrates the result above.
Example 4.4 Consider the control system given by
. With this is the inhomogeneous bilinear control system The eigenvalues of A(u) = A + u B are given by 0 One finds λ 2 (u) > 0 for u > 1 2 and λ 1 (u) < 0 for u < − 1 2 . For u ∈ − 1 2 , 1 2 one gets λ 1 (u) > 0 and λ 2 (u) < 0, hence the matrix A + u B is hyperbolic here.
For every u ∈ R, the eigenspace for λ 1 (u) is Diag 1 := {(z, z) |z ∈ R } and the eigenspace for λ 2 (u) is Diag 2 := {(z, −z) |z ∈ R }. For |u| = 1 2 the equilibria are given by Thus we see that The assumption of Theorem 4.3 is satisfied, since for u = ± 1 2 there is no solution to For the asymptotics of the equilibria, Eq. (4.5) shows that (x u , y u ) approach the line Diag 2 for u → 1 2 and the line Diag 1 for u → − 1 2 . In both cases, the equilibria become unbounded. For u → ±∞, one obtains that the equilibria approach (0, − 1 2 ) .
This discussion shows that the set of equilibria for unbounded control u consists of the following three connected branches The equilibria in B 2 and B 3 both approach (0, − 1 2 ) for |u| → ∞; cf. also Mohler [20, Figure 2.1 on p. 33] or Rink and Mohler [21, Figure 1]. The equilibria in B 2 are stable, those in B 3 are totally unstable, and those in B 1 yield one positive and one negative eigenvalue.
Control sets and equilibria of affine systems
The controllability properties near equilibria will be analyzed assuming that the linearized control systems are controllable. This yields results on the control sets around equilibria.
In order to describe the properties of the system linearized about an equilibrium, we recall the following classical result from Lee and Markus [19, Theorem 1 on p. 366]. We apply this result to affine control systems and obtain that the reachable set and the controllable set for an equilibrium are open, if the linearized system is controllable.
Theorem 5.1 Consider the control process in
Proof First we convince ourselves that Theorem 5.1 can be applied to arbitrary equilibria (x 0 , u 0 ) with u 0 ∈ int ( ) instead of (0, 0). In fact, definef (x, u) Then (0, 0) is an equilibrium oḟ 5) and the control value u = 0 is in int( − u 0 ). The solutions ψ(t, 0, u), t ≥ 0, of (5.5) are given by ϕ(t, Hence O − (x 0 ) coincides with the controllable setÕ − (0) of (5.5). The rank condition (5.2) for (5.5) involves For system (1.1) f (x, u) = A(u)x + Cu + d and for an equilibrium x u we find By ( The following proposition shows that the controllability rank condition (5.3) holds generically for controls u ∈ R m if it holds in some u 0 .
Proposition 5.3 Assume that A(u) is invertible for all u ∈ R
, and hence continuous dependence on the initial value shows that ϕ(t, Proof By Proposition 5.5 every equilibrium x u ∈ C is contained in the interior of a control set. Consider two points x u and x v in C. Then x v ∈ O + (x u ). In fact, consider a continuous path from Observe that τ > 0, since by Proposition 5.2, the reachable set On the other hand, y is an equilibrium corresponding to a control in the interior of . Again Proposition 5.2 implies that O − (y) is a neighborhood of y, and hence This contradiction shows that τ = 1 and y = x v . Thus one can steer the system from any point x u ∈ C to any other point x v ∈ C. It follows that C is contained in a single control set D. The same arguments show that, in fact, C is contained in the interior of D.
Remark 5.7
For scalar control, Theorem 4.3 shows that there are at most n + 1 connected components of the set E of equilibria, which consists of at most n + 1 smooth curves. Thus also E 0 consists of at most n + 1 smooth curves which, naturally, are pathwise connected. Hence, under the assumptions of Theorem 5.6, there are at most n + 1 control sets containing an equilibrium in the interior.
In the rest of this section, we relate the controllability properties of system (1.1) to spectral properties of the matrices A(u), u ∈ . Lemma 5.8 Consider the affine system (1.1) and suppose that x u is an equilibrium for a control value u ∈ int( ) satisfying the rank condition (5.3). The variation-of-constants formula applied for x ∈ R n and x u shows that Thus (5.7) implies has negative real part. By (i) and time reversal, Lemma 2.2, the assertion follows.
Remark 5.9
An easy consequence of this lemma is that the system is controllable if there are u, v ∈ with equilibria x u , x v in the same pathwise connected subset of E 0 such that every eigenvalue of A(u) has negative real part and every eigenvalue of A(v) has positive real part; cf. Mohler [20,Main Result,p. 28] for the special case of inhomogeneous bilinear systems of the form (2.6).
The following corollary to Theorem 5.6 shows that there is a control set around the set of equilibria for uniformly hyperbolic matrices A(u), u ∈ . Then the set E = E 0 of equilibria is compact and connected, the set E 0 is pathwise connected, and there exists a control set D with E 0 ⊂ int(D).
Proof First observe that all matrices A(u), u ∈ , are invertible, since 0 is not an eigenvalue. Thus the set E = {x u |u ∈ } of equilibria is compact and E 0 is pathwise connected, since x u depends continuously on u. By Theorem 5.6 there exists a control set containing E 0 in the interior. Since pathwise connected sets are connected the set int ( ) is connected, which implies that also = int( ) is connected, cf. Engelking [14, Corollary 6.1.11]. It also follows that the set E = E 0 is connected.
If condition (ii) of Corollary 5.10 holds with k = 0 or k = n, i.e., if all matrices A(u) are stable or all are totally unstable, the rank condition (iii) for the linearized systems can be weakened.
Corollary 5.11
Let assumption (i) of Corollary 5.10 be satisfied and assume that there are at most finitely many points in int ( ) such that the rank condition (5.3) is violated. Proof As in Corollary 5.10(i) it follows that the set E 0 of equilibria is pathwise connected. Consider equilibria x u , x v ∈ E 0 with u, v ∈ int ( ) and suppose that x u satisfies condition (5.3). Hence there is a control set D u containing x u in the interior. We use a construction similar to the one in the proof of Theorem 5.6: There is a continuous map h : Observe that τ > 0, since x u ∈ int (D u ). If τ < 1, then y := h(τ ) ∈ ∂ D u and y = x w is an equilibrium for some w ∈ int ( ). If w satisfies (5.3), then by Proposition 5.5 x w is in the interior of a control set contradicting the choice of τ . It remains to discuss the case where w violates (5.3).
(i) Since all eigenvalues of A(u) have negative real parts, Lemma 5.8(i) implies that x w ∈ O − (x u ) = R n . Hence one can steer x w (in finite time) into the interior of D u , and by continuous dependence on the initial value, this holds for all x in a neighborhood N (x w ). Note that x w ∈ D u ∩ ∂ D u . Since there are only finitely many points violating (5.3), all points h(s ) with s ∈ (τ, τ + ε) for some ε > 0 satisfy (5.3) and hence they are in a single control set D and hence x w ∈ D . Then all points in the nonvoid intersection N (x w ) ∩ D can be steered into D u . The same arguments show that one can steer points in D u into D , hence D = D u . This contradicts the choice of τ . It follows that τ = 1 and x v ∈ D u . Using We conclude that all equilibria in E 0 are contained in the interior of a single closed control set.
(ii) Since all eigenvalues of A(u) have positive real parts, Lemma 5.8(ii) implies that x w ∈ O + (x u ) = R n . This shows that x w can be reached from x u ∈ int (D u ). Continuous dependence on the initial value shows that all points in a neighborhood N (x w ) of x w can be reached from the interior of D u . Since there are only finitely many points violating (5.3), all points h(s ) with s ∈ (τ, τ + ε) for some ε > 0 are in a single control set D and x w ∈ D . Then all points in the nonvoid intersection N (x w ) ∩ D can be reached from the interior of D u . The same arguments show that some point in int (D u ) can be reached from D , hence D = D u . This contradicts the choice of τ . It follows that τ = 1 and We conclude that all equilibria in E 0 are contained in the interior of a single control set. Remark 5.4 shows for an affine system of the form (1.1) with scalar control satisfying the assumptions of Proposition 5.3 that there are at most finitely many points u where the rank condition (5.3) is violated.
Remark 5.12
Next we provide a sufficient condition for the existence of unbounded control sets.
Theorem 5.13
Consider an affine control system of the form (1.1), let C be a pathwise connected subset of the set E 0 of equilibria of system (1.1) and define (C) = {u ∈ int( ) |x u ∈ C }. Assume that (i) there is u 0 ∈ (C) such that A(u 0 ) has the eigenvalue λ 0 = 0 and Cu 0 + d is not in the range of A(u 0 ); (ii) every u ∈ (C), u = u 0 , satisfies rank A(u) = n and the rank condition (5.3).
Then, there is an unbounded control set D ⊂ R n containing C in the interior. More precisely, for u k ∈ (C) with u k → u 0 for k → ∞, the equilibria x u k ∈ C ⊂ int(D) satisfy for k → ∞ Proof By Theorem 5.6 there is a control set D containing C in the interior. In order to show that D is unbounded, we argue similarly as in the scalar situation in Theorem 4.3. Let u k ∈ (C) converge to u 0 and assume, by way of contradiction, that x u k remains bounded, hence we may suppose that there is x 0 ∈ R n with x u k → x 0 . Then the equalities A(u k )x u k = − Cu k + d lead for k → ∞ to A(u 0 )x u 0 = − Cu 0 + d contradicting assumption (i). We have shown that x u k becomes unbounded for k → ∞. Since Cu k + d → Cu 0 + d, we get On the other hand, every cluster point y ∈ R n of the bounded sequence Theorem 5.13 sheds some light on the relation between controllability properties of affine systems and their homogeneous bilinear parts: By Theorem 3.15 assumption (i) is related to the existence of a control set of the latter system in R n .
We state the following result concerning closed invariant cones (cf. Remark 3.17). This is formulated in the context of semigroup actions. Denote by S aff and S hom the system semigroups of the affine and the homogeneous bilinear control systems given by (1.1) and (2.7), respectively. They correspond to piecewise constant controls (see Appendix A of [10]). The system group of the affine control system is given by the semidirect product G = H R n , where H is the system group of the homogenous bilinear system. The affine group operation is defined by (g, v)·(h, w) = (gh, v+gw) for all (g, v), (h, w) ∈ G, and the affine action of G on R n is given by (g, v) · w = gw + v with (g, v) ∈ G and w ∈ R n using the linear action of H on R n . A set Q ⊂ R n is invariant under S aff and S hom if and only if it is invariant for the affine control system and the homogeneous bilinear control systems, respectively. We get the following relations between invariance of a closed cone for S aff and S hom . Proposition 5.14 Consider an affine control system of the form (1.1) and its homogeneous bilinear part (2.7), and let K be a closed cone in R n .
(i) Suppose that K is invariant for the homogeneous bilinear part and Cu + d ∈ K for all u ∈ . Then K is invariant for the affine control system. (ii) If K is invariant for the affine control system, then it is invariant for the homogeneous bilinear part.
Hence for every v ∈ R n there is λ > 0 such that inf{ g(λw) + v − w w ∈ K } > 0 implying g(λw) + v / ∈ K . This means for the action of S aff that (g, v) · (λw) = g(λw) + v / ∈ K contradicting the invariance of K for S aff .
Remark 5. 15 Jurdjevic and Sallet [17,Theorem 2] shows that controllability of an affine control system without fixed points can be guaranteed if its homogeneous bilinear part is controllable. Furthermore, for Q ⊂ R n let A(Q) be its affine hull. Suppose that Q is invariant for the affine control system. Then [17,Lemma 3] implies that A(Q) is invariant for the affine control system and the set { p i=1 λ i q i | q i ∈ Q, λ i ∈ R with p i=1 λ i = 0, p ∈ N} is invariant for its homogeneous bilinear part.
Finally, we illustrate Theorem 5.6 and Theorem 5.13 by discussing the control sets for two affine systems. Recall that by Theorem 3.15, the existence of a control u 0 ∈ int( ) such that 0 is an eigenvalue of A(u 0 ) is connected with the existence of an unbounded control set of the bilinear systemẋ = A(u)x. Thus the rank condition (5.3) holds in every equilibrium (x u , y u ) with |u| = 1 2 . Next we discuss the control sets for several control ranges given by a compact interval.
-Let = [−1, 1]. Then the connected components of the set E 0 of equilibria are and there are control sets D i with C i ⊂ int (D i ) for i = 1, 2, 3. Since these sets of equilibria are unbounded also the control sets are unbounded. Based on Proposition 4.1, a lengthy argument involving the phase portraits for constant controls shows that one cannot steer the system from D 2 to D 3 or D 1 and from D 1 to D 3 , hence these control sets are pairwise different.
Next we take up the linear oscillator from Example 3.20 and consider an associated affine control system. We will show that there are two unbounded control sets.
Example 5.17
Consider the affine control system given bÿ where ρ ∈ 1, 5 4 and d ∈ R. Hence the system equation has the form For the equilibria with u = −1 we find | 13,144.6 | 2021-11-12T00:00:00.000 | [
"Mathematics"
] |
Evidence Review and Practice Recommendation on the Material, Design, and Maintenance of Cloth Masks
Despite numerous masking recommendations from public health agencies, including the World Health Organization, editorials, and commentaries providing support for this notion, none had examined different homemade masks or demonstrated that perhaps not all cloth masks are the same. This article aims to provide evidence-based recommendations on cloth-mask materials, its design, and, importantly, its maintenance. Articles were obtained from PubMed and preprint servers up to June 10, 2020. Current evidence suggests that filtration effectiveness can range from 3% to 95%. Multiple layer (hybrid) homemade masks made from a combination of high density 100% cotton and materials with electrostatic charge would be more effective than one made from a single material. Mask fit greatly affects filtration efficiency, and adding an overhead knot or nylon overlay potentially provides the best fit for cloth masks. There is a paucity of evidence for masks maintenance as most studies are in the laboratory setting; however, switching every 4 hours as in medical masks and stored in dedicated containers while awaiting disinfection is recommended. Outside of these recommendations to improve the effectiveness of cloth masks to reduce infection transmission, there is a need for countries to set up independent testing labs for homemade masks made based on locally available materials. This can use existing occupational health laboratories usually used for accrediting masks and respirators.
W orldwide, face masking has become a global phenomenon associated with the coronavirus disease pandemic. This is even more important with countries removing lockdowns. Masking itself has been a common sight in many Asian countries who had faced the 2003 severe acute respiratory syndrome coronavirus (SARS-CoV) pandemic and several other recent epidemics even before the COVID-19 pandemic. 1,2 During the pandemic, an online survey at the end of January 2020 in Hubei, China, reported that up to 98% of respondents were using masks outdoors. 3 Recently, the World Health Organization and the US Centers for Disease Control and Prevention (CDC) have reversed their recommendation to advocate for universal masking with the public recommended to use homemade cloth masks and not medical masks. 4,5 The use of cloth masks to prevent infection is nothing new. It had been used for decades in surgical theaters before being replaced by the more effective medical masks and are still in use in many low-resource settings. 6 Prior to the current public masking recommendation, the CDC had provided recommendation for health professionals on the use of homemade masks (scarfs, bandana) as a last resort in settings where face masks are not available. 5 Even so, despite numerous masking recommendations from public health agencies, editorials and commentaries providing support for this notion, none had examined the different homemade masks or demonstrated that perhaps not all cloth masks are the same. 7,8 This is important because the materials used, how they are worn, and their maintenance play an important role in determining whether they'll be effective in reducing risk of transmission or may potentially bring even harm to the wearer. This article aims to examine current evidence on cloth masks and, with theoretical rationales, provide some recommendations on what might be an effective cloth mask for the public.
METHODS
Articles for the narrative review were searched from PubMed and preprint server (medRxiv) and last updated on June 10, 2020. We also hand searched references of previous reviews on masking and references of the included studies. Studies of any type were included. We only included studies that were published in full and in the English language.
Cloth-Mask Materials
While a recent study has shown that surgical masks are effective in preventing transmission of human coronaviruses found in exhaled breath, there have been no head-to-head studies for surgical versus cloth/homemade masks' effectiveness against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative virus of COVID-19 in the community. 9 However, there have been laboratory experiments conducted to assess particle filtering effectiveness of cloth masks based on the percentage of particles penetrating the material (particle penetration) and a few studies examining the effectiveness of cloth masks against influenza and other coronaviruses.
A recent study by Ma et al. comparing the efficacy of N95 masks, medical masks, and homemade masks (made from 4 layers of kitchen paper and 1 layer cloth) against avian influenza to mock the coronavirus reported that 99.98%, 97.14%, and 95.15%, respectively, of the virus in aerosols made by a nebulizer were blocked. 10 Another study by van der Sande similarly found that a homemade mask (made from tea cloth) decreases viral exposure and infection risk. Even so, it provides about half as much the protection (measured as an inverse of particle leakage) as surgical masks, which the authors suggest is due to its imperfect fit. 11 The pillowcase and 100% cotton T-shirt have also been reported to be the most suitable material for an improvised face mask based on a study of 10 materials by Davies et al. 12 They reported that, for influenza, homemade masks have one-third the effectiveness of medical masks, although homemade masks were able to significantly reduce the number of microorganisms expelled compared to no protection. 12 However, a cluster randomized trial of cloth masks (2 layers made of cotton) compared with medical masks among health care workers conducted by MacIntyre et al. reported the particle penetration of cloth masks and medical masks as 97% and 44%, respectively. The study also reported that there was a significantly higher risk of influenza like illness (adjusted RR = 6.64, 95% CI: 1.45 to 28.65) and laboratory-confirmed virus (adjusted RR = 1.72, 95% CI: 1.01 to 2.94) in the cloth mask group compared to medical mask group. 6 Not all cloth mask materials provide the same protection from particle penetration, which may potentially be 1 explanation to the differing findings. A laboratory test conducted among 44 different masks of 5 typesyellow sand, quarantine, medical, general masks, and handkerchiefsshowed great variability in particle penetration characteristics. An interesting finding with regard to general masks is the differing particle penetration between those made from nonwoven and cotton. Nonwoven masks resulted in significantly less particle penetration compared to cotton masks with mean particle penetration of 53% versus 70%, respectively, based on particle penetration measurements performed using the Korean Food and Drug Administration (similar to European Union test protocol) method. Furthermore, handkerchiefs made of cotton and gauze even up to 4 layers thick reported a particle penetration of 87% and 97%, respectively. 13 Similarly, another laboratory test conducted by the US National Institute for Occupational Safety and Health laboratory reported that cloth masks and other fabrics tested against polydisperse and monodisperse NaCl aerosols (20-100 nm) showed 40% to 90% particle penetration, with the best material being a 100% cotton sweatshirt compared with a mix of cotton and polyester or 100% polyester. 14 The absence of electrostatic charge in the fabric tested in the study may play a role in its high particle penetration. Interestingly, these particle penetration levels were reported to be similar to some commonly used US Food and Drug Administration approved surgical masks and unapproved dust masks that have particle penetration levels of 51% to 89% in similar testing. It must also be noted that previous studies have shown that even N95 masks, while able to filter 95% of particles greater than 3 nm in size, are marginally beneficial against particles < 2.5 μm in diameter as tested against in this study.
This was supported by the result of Shakya et al.'s study, which shows that the filtration effectiveness of cloth masks was better for particles larger than 1 μm in diameter (64-94%) compared with those smaller than 1 μm (44-93%). 15 A feature they report to perform better is cloth masks with exhalation valves and conical in shape to fit the face, although it must be noted that this study aims to evaluate the effectiveness in blocking toxic fumes not biological particles.
A recent laboratory study also showed that higher density cotton (600 threads per inch) prevents penetration of particles < 300 nm by > 65% and penetration of particles > 300 nm in diameter by > 90%. 16 Interestingly, the quilt, a common household item made of 90% cotton-5% polyester-5% other fibers was found to perform even better blocking penetration of > 80% for particles < 300 nm and > 90% for > 300 nm. Combining high density cotton with silk, 2 layers of chiffon (90% polyester, 10% spandex), and 1 layer of flannel (65% cotton, 35% polyester) also provided a similar effectiveness. Although they were all slightly inferior to N95 in filtering particles above 300 nm, they were superior for particles < 300 nm. 16 There are several key insights that can be gained from the above studies on mask materials, including the following: 1. Nonwoven materials and, in some cases, high density 100% cotton seem to provide the best particle filtration capacity, especially for larger-sized particles. 2. Materials with electrostatic charge (such as silk, polyester, and other synthetic fibers) are superior to the above materials for filtering smaller-sized particles (< 300 nm). 3. There's limited evidence that simply adding the number of layers from the same material may not provide a significant increase in effectiveness.
Hence, it can be argued as 1 study has shown that a multiple layer (hybrid) homemade mask made from a combination of high density 100% cotton and materials with electrostatic charge would be more effective than a mask made from a single material. Prolonged use of masks is not always suitable for everyone, breathability considerations also come into play when preparing one's own mask, and those with chronic lung disease, such as chronic obstructive pulmonary disease, may consider using a mask with fewer layers to improve breathability and adherence to use in daily life. Furthermore, for the studies against influenza, we must note that, while both SARS-CoV-2 and influenza are respiratory viruses, they belong to different families and hence have different transmissibility, which will limit the generalizability of those findings to the current pandemic. Even so, they provide some signal of benefit in reducing penetration of viral particles.
Mask Design
Respirators, such as N95 and FFP masks, are protective not only due to their particle filtering capabilities, but also due to their fit, unlike surgical masks and cloth masks that do not fit tightly. A recent study has showed that mask filtration efficiency can fall by up to 60% when improperly fitted. 16 With this regard, design factors can play a role in improving fit as shown in the study by Dato et al., which showed that a fit factor of 67 (N95's fit factor is 100) can be achieved with cloth masks. 17 Here, a cloth masks were made with 3 knot points -2 behind the head and 1 over the head with 8 inner layers in the center piece. 17 A recent study by Mueller et al. 18 also showed the potential to improve fit through the use of a nylon stocking layer over the masks to cover all the edges. This reduces the flow of air around the edges and was reported to improve the particle filtration efficiency by 15% to 50%. Find below the schematic for a mask design that we recommend based on these studies (Figure 1).
Cloth masks during the pandemic have the main goal of source control, reducing transmission of droplets containing viral particles. Even so, these particles remain larger than the ultrafine particles (< 100 nm in size) from air pollution or toxic fumes. 1 Hence, it can be argued that a perfect fit is not the main aim of cloth masks as that itself is not a feature of medical masks. However, a better fit and less leakage will provide greater potential to reduce aerosol penetration; simple adjustments to the design of homemade masks and adding a nylon stocking layer can aid in this. This is especially more important in light of recent evidence suggesting airborne transmission of SARS-CoV-2, especially in enclosed spaces. 19 Importantly, the public must be made to understand the importance of a mask covering until the bridge of one's nose and chin, not having it hanging and only covering the mouth.
Mask Maintenance
There is a paucity of evidence with regard to mask maintenance. Most studies reported previously are conducted in the laboratory setting where their use is limited to 1 to 2 hours and only once. The only community study conducted in Vietnam 6 in health care workers compared the use of cloth masks (1 per day) with another arm wearing medical masks replaced every 4 hours. This study design may be 1 reason to explain the differing results in this study and the short laboratory experiments. A previous study has shown that a longer duration of mask use (> 6 hours) resulted in a significantly higher percentage of masks containing viral particles. 20 In such a scenario, the mask becomes a fomite and not prevent but may transmit infection.
Reflecting on this, it is our view that one should have multiple masks and that cloth masks should be worn and changed for much shorter periods of time or at least the same with medical mask guidelines, which is ideally a mask every 4 hours or until damp. Considering that this will be used by the community, an easy reminder to change masks is to have a mask change during the mealtime when it must be taken off. This would help promote adherence to this recommendation. As what we are discussing here is about community use, not in health care workers, people will not have the discipline to wear masks properly for long periods of time. It can be argued that they may not have to. There is rationale to recommend its wear only in public, especially during commuting to an office where they can have it removed as long as they are physically distanced from others and have sufficient ventilation, preventing a buildup of virus-laden droplets in the air.
One other critical aspect is what happens with the mask after the user takes it off, say, after lunch, when they are ready for the second mask for the day? A surgical mask would be thrown away at this stage as they are recommended for use for only 4 hours. However, this cloth mask will not. Therefore, together with a mask, people will need to have a container where they would store the contaminated mask before they can wash them. This is very practical, maybe too obvious, but we suspect that many people will not worry about this and put them into the pocket, handbag, keep it hanging on their neck, and so on, carrying with them a reservoir of the virus. Plastic containers or Ziploc bags, which are disposable or easy to disinfect, would be possible options for storage.
Being reusable, cleaning the mask after use is another important aspect that should not be taken for granted. How should these masks be best cleaned? Alcohol, while an effective disinfectant, has been shown in previous studies to reduce effectiveness of masks. A study by Martin Jr and Moyer et al. showed that cleaning respirators (N95, N99, R95, P100) by dipping them into isopropanol (alcohol rub) for 15 seconds and then dried resulted in a substantial increase in penetration to up to 65%. 21 This is a result of isopropanol reducing or eliminating any electrostatic charge on the fibers, which, discussed previously, has filtering properties.
While there have been no studies examining different cleaning mechanisms and eradication of viral particles, the CDC recommends washing using hot water 70-80°C for 10 minutes using detergent. 22 Even so, other studies have shown that temperatures above 50°C are significantly able to reduce microorganisms even without the use of detergents. 23 These temperatures are the rationale to follow in cleaning homemade masks, based on reports during SARS-CoV that they are eradicated at temperatures above 56°C for 15 minutes or when exposed to humidity above 95%. Hence, exposure to hot water for 15 minutes can also be a readily accessible way to clean the masks. 24 Outside of washing, drying with the sun's ultraviolet (UV) exposure has been proposed as a way to disinfect. However, while previous studies have shown that UV rays helped inactivate SARS-CoV, it was only UVC that provided the required germicidal properties, which is not present from sun rays as they're filtered by the atmosphere. 25
FUTURE IMPLICATIONS
Outside of these recommendations, to improve the effectiveness of cloth masks to reduce infection transmission, there is a need for countries to set up independent testing labs for homemade masks made based on locally available materials. This can use existing occupational health laboratories usually used for accrediting medical masks and respirators.
The COVID-19 pandemic is a time to push for innovation in the development of personal protective equipment for health care workers and the community. While recommending that universal masking is something to support, the use of masks, which may previously appear mundane and low tech, requires proper design, wear, and maintenance for it to be effective enough to support reducing transmission during the pandemic. | 3,984.4 | 2020-09-02T00:00:00.000 | [
"Physics"
] |
Characterization of iron speciation in urban and rural single particles using XANES spectroscopy and micro X-ray fluorescence measurements : investigating the relationship between speciation and fractional iron solubility
Soluble iron in fine atmospheric particles has been identified as a public health concern by participating in reactions that generate reactive oxygen species (ROS). The mineralogy and oxidation state (speciation) of iron have been shown to influence fractional iron solubility (soluble iron/total iron). In this study, iron speciation was determined in single particles at urban and rural sites in Georgia USA using synchrotron-based techniques, such as X-ray Absorption Near-Edge Structure (XANES) spectroscopy and microscopic X-ray fluorescence measurements. Soluble and total iron content (soluble + insoluble iron) of these samples was measured using spectrophotometry and synchrotron-based techniques, respectively. These bulk measurements were combined with synchrotron-based measurements to investigate the relationship between iron speciation and fractional iron solubility in ambient aerosols. XANES measurements indicate that iron in the single particles was present as a mixture of Fe(II) and Fe(III), with Fe(II) content generally between 5 and 35 % (mean: ∼25 %). XANES and elemental analyses (e.g. elemental molar ratios of single particles based on microscopic X-ray fluorescence measurements) indicate that a majority (74 %) of iron-containing particles are best characterized as Al-substituted Fe-oxides, with a Fe/Al molar ratio of 4.9. The next most abundant group of particles (12 %) was Fe-aluminosilicates, with Si/Al molar ratio of 1.4. No correlation was found between fractional iron solubility (soluble iron/total iron) and the abundance of Alsubstituted Fe-oxides and Fe-aluminosilicates present in single particles at any of the sites during different seasons, suggesting solubility largely depended on factors other than differences in major iron phases.
Introduction
Iron is an important component in atmospheric aerosols due to its potential impacts on human health (Smith and Aust, 1997).Adverse health effects associated with aerosols, such as cell and DNA damage, can stem from toxic levels of reactive oxygen species (ROS; e.g.hydrogen peroxide, hydroxyl radical, superoxide anion and organic peroxides, etc.) that form as a consequence of redox cycling of trace metals (Kelly, 2003;Vidrio et al., 2008).In comparison to other trace metals, iron has been reported as a significant source of ROS via metal-mediated pathways (Shafer et al., 2010;Smith and Aust, 1997;Zhang et al., 2008).The role of metals in adverse health impacts associated with aerosols depends largely on the fraction of total metal content that is readily soluble (Costa and Dreher, 1997;See et al., 2007;Valavanidas et al., 2008), thus, primary factors and mechanisms that alter iron aerosol solubility must be understood to further link aerosol iron to adverse health effects.
A growing body of knowledge has emerged on various control factors that impact iron aerosol solubility.Soluble iron in aerosols varies between 0 to 80 % of total iron, showing no general trend with total iron concentration (Mahowald et al., 2005).While several chemical mechanisms and physical particle properties have been shown to influence iron solubility, there is still significant uncertainty on the primary factors that control fractional iron solubility (Baker and Croot, 2010).Modeling, laboratory and field studies have suggested that iron particles in dust may undergo atmospheric transformations (e.g.acid-processing) that may enhance fractional iron solubility (Meskhidze et al., 2005;Shi et al., 2009).On the other hand, Baker and Jickells (2006) observed no relationship between atmospheric acid processing and iron solubility in coarse and fine crustal particles in a marine environment, but instead observed a relationship between increasing fractional iron solubility and decreasing dust mass concentration, which was likely accompanied by decreasing particle size.In this particular study, the observed increase in fractional iron solubility was attributed to the large surface area available for iron dissolution that is characteristic of small particles (e.g.large surface area to volume ratio).However, Shi et al. (2010) later demonstrated that differences in particle size alone cannot explain the increase in fractional iron solubility observed in Baker and Jickells (2006), suggesting that other processes (e.g.acid-processing or mixing with other anthropogenic particles) may play a more dominant role or work synergistically with particle size to promote fractional iron solubility.A few recent laboratory studies have observed a strong relationship between iron speciation and fractional iron solubility in crustal and industrial fly ash particles (Journet et al., 2008;Cwiertny et al., 2008;Schroth et al., 2009).Cwiertny et al. (2008) showed that Fe(II)-containing solid phase minerals may contribute to a significant portion of soluble iron in crustal sources.In addition, Schroth et al. (2009) showed that soluble iron content from industrial combustion sources, comprised mainly of iron sulfates, was significantly greater (∼80 % of total iron) than the soluble content of crustal particles (∼0.04-3 % of total iron), which were mainly comprised of iron oxides and silicates.These results are also consistent with aerosol data from a field study in Korea that showed enhanced fractional iron solubility in anthropogenic combustion sources rather than crustal sources (Chuang et al., 2005).However, there was insufficient data to determine whether unique speciation or acid-processing of iron in the combustion particles caused enhanced solubility.In addition, other studies have shown a positive relationship between organically-complexed iron (iron oxalate complexes) and fractional iron solubility (Paris et al., 2011). Furthermore, Furukawa et al. (2011) demonstrated that a majority of oxalate in ambient samples from Japan exists as metal complexes.Although the relationship between iron speciation and fractional iron solubility has been established in source particles (e.g.crustal and industrial), it is not clear in ambient particles.A comprehensive knowledge of iron speciation in relation to solubility in urban and rural aerosols would help to further understand its association with public health.
Relatively few analytical tools are available to provide detailed characterization of iron, which are described in detail by Majestic et al. (2007).Typically, studies of aerosols rely upon chemical extractions or spectroscopic techniques that provide oxidation and/or mineralogy information on bulk properties of iron in a sample.Spectrophotometry (e.g.ferrozine) and high performance liquid chromatography (HPLC) have been used to quantify Fe(II) and Fe(III) in bulk aerosol samples, but yield little information on mineral-ogy (Johansen et al., 2000;Zhuang et al., 1992).Mossbauer spectroscopy has been successfully used to directly characterize the oxidation state and mineralogy in aerosol samples (Hoffmann et al., 1996); however, collection of aerosol over a several month period is required to obtain sufficient mass for analysis (∼1 g).Recent innovations in synchrotron-based Xray absorption spectroscopy, specifically X-ray Absorption Near Edge Structure (XANES) and Extended X-ray Absorption Fine Structure (EXAFS) spectroscopies, have made it possible to explore both the oxidation state and mineralogy of iron.These methods require minimal sample preparation and are capable of single particle analysis.XANES and EX-AFS have been widely used to probe iron speciation in soil samples (Marcus et al., 2008;Prietzel et al., 2007).Werner et al. (2007) recently extended EXAFS to atmospheric aerosols to identify oxidation state and mineralogy of chromium in urban California.A few studies have demonstrated the feasibility and benefits of synchrotron-based X-ray spectroscopic techniques using a low energy X-ray beam for the analysis of iron in aerosol samples, but primarily focused on oxidation state characterization (Majestic et al., 2007;Takahama et al., 2008).In this study, particles collected on Teflon filters from urban and rural sites were investigated using synchrotron-based methods, XANES and microscopic X-ray fluorescence, to characterize the oxidation state, elemental association and mineralogy of single iron-containing particles.Soluble iron was quantified using the ferrozine method (Stookey, 1970) to investigate the link between speciation and solubility properties.
Sample collection and storage
Iron particles collected on Teflon filters (Whatman, Piscataway, New Jersey: 47 mm diameter, 2 µm pore size) were analyzed using XANES and microscopic X-ray fluorescence. Twenty-four hour integrated PM2.5 filters were collected during different seasons at three urban sites and one rural site (Table 1) for the ongoing Assessment of Spatial Aerosol and Composition in Atlanta (ASACA) air quality study (Butler et al., 2003) and used in this analysis. Briefly, ambient air at a nominal flow rate of 16.7 l min−1 was pulled through a cyclone (URG, Chapel Hill, North Carolina, USA), selecting for particles with an aerodynamic diameter less than 2.5 µm (PM2.5), and then directed through a series of two annular glass denuders (URG, Chapel Hill, North Carolina, USA), removing acidic and alkaline gases. The particles were then collected onto the Teflon filter. Samples were stored in sealed polyethylene bags in a dark freezer (∼ −20 °C) immediately after collection and were analyzed within 1 to 11 months. Because iron is non-volatile, the sampling artifacts of concern are changes in iron oxidation state during sample storage rather than volatilization losses. Majestic et al. (2006) studied this specific artifact in aerosol samples and observed minimal Fe(II) loss in samples stored in a dark freezer for periods up to 6 months.
In addition, Takahama et al. (2008) found no evidence for significant Fe(II) loss in samples stored at freezing temperatures over extended periods of time (>1 year). Although Fe(II) loss due to chemical conversion is possible on these samples, it is not expected to be significant given the sample storage time and conditions employed in this study. Before XANES and solubility analysis, filter samples were cut with ceramic scissors into half portions, with each portion used for either synchrotron-based analyses or fractional iron solubility measurements.
Single particle analysis: synchrotron-based X-ray spectroscopy
Synchrotron-based X-ray spectroscopy is based on the principle that every element has characteristic absorption edges that correspond to the binding energy of electrons in individual quantized shells (e.g. K, L2, and L3). In this technique, incident X-rays of sufficient energy bombard atoms, ejecting electrons from an electron shell. Subsequently, an outer-shell electron may relax into the vacated position, emitting a characteristic fluorescence signal. K-edge XANES spectroscopy, used in this study, specifically explores the absorption edge associated with the innermost, K-shell electron.
The ejected electrons of the innermost K-shell interact with neighboring atoms. These interactions are influenced by the type, oxidation state and structural arrangement of atoms in a particle and are reflected in XANES spectra (Ingall, 2011). Thus, XANES spectra provide information on both the oxidation state and the mineralogical structure associated with the element of interest. A total of 221 iron-containing particles deposited on the Teflon filters were analyzed on the 2-ID-D beamline at the Advanced Photon Source at Argonne National Laboratory in Argonne, Illinois, USA. The 2-ID-D beamline uses an energy-dispersive Si-drift detector (Vortex EM, with a 50 mm² sensitive area and a 12.5 µm Be window; SII NanoTechnology, Northridge, CA, USA) to measure X-ray fluorescence of the sample. All measurements were conducted under a helium atmosphere in order to minimize absorption and fluorescence artifacts caused by low-Z elements in air. A randomly selected area of each filter sample (∼0.5 cm²) was placed over a slot of an aluminum sample mount for direct spectroscopic analysis of the iron particles on the filter. The sample was initially analyzed in microscopic X-ray fluorescence mode to identify regions on the filter with detectable iron concentrations (i.e. iron-containing particles). In this mode, a monochromatic X-ray beam with a diameter of ∼400 nanometers was scanned over a filter area (typically ∼40 × 40 µm) at a step size of 0.4 µm and 0.4 s dwell to produce an elemental distribution map of the filter. These maps were produced by setting the X-ray energy to 7200 eV, which allowed for the collection of K-edge X-ray fluorescence data on elements with masses from aluminum to iron (Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, and Fe). The fluorescence data were converted into concentration data (µg cm−2) for each element using a calibration with National Bureau of Standards (NBS) reference material. Minimal interference was caused by the PTFE and Zefluor filters, as seen in Fig. 1 (low background signal). In addition to locating iron-containing particles, calibrated data from these maps were used to characterize the association of other elements with iron. An energy scan (i.e. XANES analysis) was subsequently collected for iron-containing particles identified in microscopy mode (typically 30 iron-containing particles per filter). The X-ray energy scale was calibrated to the iron K-edge (7112.0 eV) using an iron metal foil before the XANES measurements were performed. The incident X-ray energy was varied from 7090 to 7180 eV in 0.5 eV increments using a monochromator with a 0.5–3.0 s dwell to produce an energy scan near the iron K-edge of a given iron-containing particle.
XANES spectra analysis using ATHENA software (2.1.1)
ATHENA software (version 2.1.1) was used to process the raw energy spectra. Individual energy scans were smoothed using a three-point algorithm for 10 iterations. The energy scans were subsequently normalized using the edge-step normalization option to avoid mathematical discrepancies caused by directly dividing the fluorescence signal of the incident X-ray beam by the signal in the upstream ionization chamber. The pre-edge centroid of the XANES spectra was the primary spectral feature used to determine oxidation state. The pre-edge centroid position was only determined from high-intensity spectra (>5000 intensity counts: 103 spectra) to avoid interferences caused by the low signal-to-noise ratio of low-intensity spectra. The pre-edge feature was normalized by subtracting the background absorption, calculated by interpolating a cubic spline through the absorption 1 eV before and after the pre-edge feature. A Gaussian equation was fit to the normalized pre-edge feature using the peak-fitting program in Igor software (version 6.1) to determine the pre-edge centroid position. Figure S1 provides a detailed demonstration of how the pre-edge centroid feature was extracted in this study. In addition, XANES energy scans of a wide range of Fe(II) and Fe(III) minerals (augite, pyrite, iron(II) oxalate, iron(II) sulfate, goethite, hematite, iron(III) oxalate, iron(III) sulfate) were collected at the 2-ID-D beamline under similar sampling conditions as the ambient sample analysis during February 2010. Table S1 provides a detailed description (classification and origin) of each iron mineral standard. Powder from each iron mineral standard was mounted on an aluminum sample stick using double-sided tape for analysis. The same pre-edge centroid analysis was applied to the XANES data of the mineral standards for comparison to the ambient sample data. Oxidation state was determined from the relationship between oxidation state and pre-edge centroid position. In this study, a linear equation was interpolated through the Fe(II) (augite, pyrite, iron(II) sulfate, iron(II) oxalate) and Fe(III) (goethite, hematite, iron(III) oxalate and iron(III) sulfate) mineral data, with the mean pre-edge centroid positions of the Fe(II) and Fe(III) minerals representing 0 % Fe(III) and 100 % Fe(III), respectively. The pre-edge centroid position determined from single particles was converted to % Fe(II) content using this interpolation (Eq. 1). Several studies have used a similar approach to convert the pre-edge centroid position of K-edge XANES spectra of octahedral-coordinated Fe minerals into oxidation state (Bajt et al., 1994; Wilke et al., 2001).
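As a rough illustration of the pre-edge analysis described above, the following Python sketch fits a Gaussian to a background-subtracted pre-edge feature and converts the fitted centroid into a percent Fe(II) estimate by linear interpolation between the Fe(II) and Fe(III) standard means (the form of Eq. 1). The standard centroid energies used here are placeholders, not the study's calibration values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(e, amp, center, sigma):
    # Gaussian profile used to model the normalized pre-edge feature.
    return amp * np.exp(-((e - center) ** 2) / (2 * sigma ** 2))

def preedge_centroid(energy_ev, preedge_absorption):
    # Fit a Gaussian to the background-subtracted pre-edge feature and
    # return the fitted peak center (the centroid) in eV.
    p0 = [preedge_absorption.max(), energy_ev[np.argmax(preedge_absorption)], 0.5]
    popt, _ = curve_fit(gaussian, energy_ev, preedge_absorption, p0=p0)
    return popt[1]

# Placeholder mean centroid energies of the Fe(II) and Fe(III) mineral
# standards (the study derives its own values from augite, pyrite,
# goethite, hematite, etc. measured at the same beamline).
E_FE2, E_FE3 = 7112.9, 7114.4

def percent_fe2(centroid_ev):
    # Linear interpolation between the standard means (form of Eq. 1):
    # centroid at E_FE2 -> 100 % Fe(II); centroid at E_FE3 -> 0 % Fe(II).
    return 100.0 * (E_FE3 - centroid_ev) / (E_FE3 - E_FE2)
```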
Fractional iron solubility analysis
Soluble iron on the filter samples was measured using the ferrozine technique (Stookey, 1970), based on the absorption of light by the Fe(II)-ferrozine complex at 562 nm to quantify Fe(II) in solution. A DT-Mini-2 light source equipped with dual deuterium and tungsten halogen bulbs (Ocean Optics: Dunedin, Florida, USA) provided light in the UV/VIS range (200–800 nm), and a USB2000 spectrophotometer (Ocean Optics: Dunedin, Florida, USA) was used for the light absorption measurements. A flow-through 100 cm Liquid Waveguide Capillary Cell (LWCC) (World Precision Instruments: Sarasota, Florida, USA) provided a long liquid absorption path length to enhance measurement sensitivity. The spectrophotometer was calibrated using five ammonium Fe(II) sulfate standards ranging from 0 to 20 ppb Fe(II) in solution (typical r² = 0.9999) before and after each soluble-iron analysis. The deionized water used for the sample leach in this study showed minimal interference with the ambient sample measurement (<1 %).
Sample preparation and analysis in this study were similar to the protocol described by Majestic et al. (2006) for the analysis of soluble iron on 24-h integrated filter samples. One half of each filter sample was placed in an acid-cleaned 30 ml amber Nalgene bottle, to which 15 to 20 ml of de-ionized water (>18.0 MΩ) was added. PM2.5 was extracted into solution via 30 min of ultra-sonication. A 10 ml aliquot of the extracted sample was filtered through a 0.45 µm PTFE filter (Fisher Scientific: Pittsburgh, Pennsylvania, USA) to remove insoluble particles (>0.45 µm diameter) from the solution. Ferrozine (5.1 mM) was added to the sample aliquot (100 µl ferrozine per 10 ml sample), which was pulled through the LWCC after 10 min of incubation. Light absorption was immediately measured at 562 nm (absorption maximum of the Fe(II)-ferrozine complex) and 700 nm (background measurement) to yield a 10-min, operationally-defined soluble Fe(II) measurement. Hydroxylamine (HA) was subsequently added to the remaining filtrate (100 µl HA per 10 ml sample) to reduce soluble Fe(III) to Fe(II). After 10 min of incubation, the light absorption measurements were repeated following the same procedure as the Fe(II) measurements, yielding the total soluble iron (Fe(II) + Fe(III)) content of the filtrate. The Fe(III) concentration was determined by subtracting the soluble Fe(II) concentration from the total soluble iron concentration. The method limit of detection is estimated to be 0.11 ng m−3 (Majestic et al., 2006), which is well below soluble iron concentrations typically observed in urban aerosols.
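The operational sequence of the ferrozine measurement lends itself to a simple calculation: Fe(II) from the first absorbance reading, total soluble iron after hydroxylamine reduction, and Fe(III) by difference. The Python sketch below assumes a linear absorbance-to-concentration calibration (slope and intercept from the ammonium Fe(II) sulfate standards); all names and values are illustrative.

```python
def absorbance_to_fe_ppb(a562, a700, slope, intercept=0.0):
    # Convert the background-corrected absorbance (562 nm signal minus
    # 700 nm baseline) to a liquid Fe concentration in ppb using the
    # linear calibration from the ammonium Fe(II) sulfate standards.
    return slope * (a562 - a700) + intercept

def speciated_soluble_iron(a562_fz, a700_fz, a562_ha, a700_ha, slope):
    # Operational sequence from the text: Fe(II) from the ferrozine-only
    # reading, total soluble Fe after hydroxylamine (HA) reduction,
    # and soluble Fe(III) by difference.
    fe2 = absorbance_to_fe_ppb(a562_fz, a700_fz, slope)
    fe_total = absorbance_to_fe_ppb(a562_ha, a700_ha, slope)
    return fe2, fe_total, fe_total - fe2
```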
Bulk total iron concentration was determined from the micro X-ray fluorescence measurements. In this approach, the total iron measured on a given elemental map (typically 40 × 40 µm) was multiplied by the total sample area divided by the elemental map area to estimate the total iron concentration of the sample. When two or more elemental maps were collected for a given sample, the average of the total iron concentrations from all elemental maps was used. Uncertainty was determined from the variability of total iron measured on different elemental maps of the same filter sample. In this study, there was moderate uncertainty (20–55 % standard deviation) associated with this measurement approach, predominantly caused by an uneven distribution of iron-containing particles on the filter. Fractional iron solubility was determined by normalizing soluble iron to total iron concentration (i.e. soluble iron/total iron). The uncertainty in fractional iron solubility was estimated for each filter sample by propagating the errors associated with the soluble iron (1–3 %) and total iron (20–55 %) measurements.
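The map-to-filter scaling and the error propagation described in this paragraph can be summarized as in the sketch below; the quadrature combination of relative errors is a standard propagation choice and an assumption here, since the paper does not give its exact formula.

```python
import numpy as np

def total_iron_from_maps(map_iron, map_area_cm2, sample_area_cm2):
    # Scale the iron measured on each small elemental map up to the whole
    # filter; the mean over maps is the estimate and the relative spread
    # its uncertainty (requires at least two maps for a spread).
    per_map = np.asarray(map_iron, float) * (sample_area_cm2 / np.asarray(map_area_cm2, float))
    return per_map.mean(), per_map.std(ddof=1) / per_map.mean()

def fractional_solubility(soluble_fe, total_fe, rel_err_soluble, rel_err_total):
    # Fractional iron solubility with relative errors combined in
    # quadrature -- a standard propagation choice, assumed here because
    # the text does not spell out its exact formula.
    frac = soluble_fe / total_fe
    return frac, frac * np.sqrt(rel_err_soluble ** 2 + rel_err_total ** 2)
```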
Identification of iron-containing particles
Using microscopic X-ray fluorescence, several areas were scanned on each urban and rural filter (1–4 maps per filter) to map out the spatial distribution and concentration of elements from aluminum to iron (Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, and Fe), referred to as elemental maps. Iron-containing particles were identified in this analysis and were subsequently analyzed using XANES spectroscopy. In addition, the elemental maps provided data on the elements that were associated with iron in each particle. The combination of XANES spectra and microscopic X-ray fluorescence was used to characterize mineralogy. Figure 1 shows an example of iron, aluminum, and silicon elemental maps from the South Dekalb winter (11/11/08) filter. The fourth map presented in Fig. 1 shows the combined signal of all three elemental maps, indicating that both aluminum and silicon were associated with certain iron-containing particles in this sample.
Iron oxidation state and mineralogy
The pre-edge centroid position is the primary XANES spectral feature used to determine the oxidation state and coordination chemistry of a given iron particle. It has been widely used to study iron in common minerals in soils (Prietzel et al., 2007; Wilke et al., 2001) and in continental shelf particles in the ocean (Lam and Bishop, 2008). The energy of the pre-edge centroid position shifts by 1.4 to 3 eV for a change of one valence electron (e.g. Fe(II) to Fe(III)).
The pre-edge centroid position was calculated for 103 particles (i.e. particles with high-intensity spectra, >5000 raw counts) from the filter samples. Figure S2 shows the distribution of pre-edge centroid positions for single iron-containing particles from our samples. The pre-edge centroid position varied by 2.05 eV among filter samples, ranging from 7112.75 to 7114.8 eV with an average of 7114.0 ± 0.3 eV, indicating significant oxidation state variability among our samples. The majority of the pre-edge centroid data for the urban and rural Fe particles falls between the centroid positions observed for the Fe(II) and Fe(III) minerals (Fig. S2). Figure 2 shows the corresponding percent Fe(II) to total Fe of single particles, based on pre-edge centroid position, for each of the 8 filters identified by sampling site and season. The Fe(II) fraction in single particles from both urban and rural sites was generally between 5 and 35 %, with the majority of the particles consisting of roughly 25 % Fe(II). The rural site (Fort Yargo) during the winter had a much higher Fe(II) content than the other filters, with a median Fe(II) content of 49 %. In addition, a few particles (6 out of 103) had much lower pre-edge centroid positions (7112.75–7113.15 eV) compared to the average of the entire dataset (7114.0 eV), indicating that iron in these particles was 100 % Fe(II). The Fe(II) content of single particles in this study is greater than that observed by Takahama et al. (2008), who showed that the majority of marine and urban Fe aerosols exist as mixed-oxidation-state agglomerations and surface-reduced particles containing less than 10 % Fe(II). The comparison between our results and those of Takahama et al. (2008) suggests that large differences in iron redox state may characterize iron collected in different regions and seasons.
XANES spectra were similar for most of the Fe particles analyzed, regardless of season or site. Figure 3a shows the XANES spectra of a typical oxidized particle and of a reduced particle (100 % Fe(II) content, determined by the pre-edge centroid position as seen in Fig. 3b, solid red line) observed in this study. Although the reduced particle shows a strong shift of the pre-edge and K-edge peak positions to lower energy, the shape of the spectrum is similar to that of the oxidized particles. Figure 4 shows the XANES spectrum of a typical Fe particle (Sample 1, dashed blue spectrum) observed in this study compared to the XANES spectra of several Fe(II) and Fe(III) compounds. The XANES spectra, for the most part, closely follow the spectra of iron oxides (e.g. goethite and hematite, blue solid spectra) and lack a resemblance to other classes of Fe minerals, such as silicates (augite and biotite), sulfides (pyrite), organics (Fe(II) and Fe(III) oxalate), and sulfates (Fe(III) sulfate), suggesting that the majority of Fe in urban aerosols is iron oxides. Further separation into specific Fe oxides was difficult, since differences in spectral features within this mineral class are very subtle. Most of the XANES spectra of the reduced particles follow the spectra of iron oxides with a shift in edge position; however, a few (2 out of 13) spectra of "more reduced" particles (pre-edge centroid position <7113.6 eV) show a strong resemblance to silicates (e.g. biotite), as shown in Fig. 4. The presence of Fe(II) (based on pre-edge centroid position) in iron-containing particles that appear to be iron oxides may suggest the presence of surface-reduced species.
Elemental composition of iron-containing particles: insight on mineralogy
In addition to the XANES spectra, the elemental composition determined from the microscopic X-ray fluorescence measurements of each iron-containing particle was investigated to further understand iron mineralogy. Elemental concentrations were converted to molar units (mol cm−2) and compared to the iron molar concentration of each particle. Collectively, the iron data showed no strong correlation with any element (r² < 0.20 for Fe mol vs. X mol, where X represents elements from Al to Mn). Although a portion of the single-particle elemental data had low molar concentrations, two clear trends emerged when Fe (mol) was plotted against Al (mol), as in Fig. 5. The trends indicate that the iron-containing particles could be divided into two groups. The first group, comprising the majority of particles (163 out of 221, 74 %) (Fig. 5: outlined by the blue area), was low in silicon (Si molar concentration <0.1), yet contained a relatively consistent fraction of aluminum, in a 4.9:1 Fe:Al molar ratio (r² = 0.81, p < 0.05, i.e. significant at the 0.05 level). The aluminum content of these particles greatly exceeded the trace aluminum levels that would be expected in pure iron oxide minerals, which ideally contain only Fe, O, and OH. Iron is commonly substituted by cations of similar size and charge, like aluminum, in iron oxide matrices, as is often observed in crustal particles (Cornell and Schwertmann, 2003). For example, aluminum substitution observed in goethite can vary from 0 to 33 % on a molar basis. These data, coupled with the XANES spectra, suggest that these particles are likely Al-substituted Fe-oxides.
The second group of particles (26 out of 221, 12 %), shown in Fig. 5, is characterized by lower iron concentrations and enhanced levels of silicon (Si molar concentration >0.1) and aluminum (Al molar concentration >0.1) relative to the first group, and is referred to here as Fe-aluminosilicates. The silicon content of these particles correlates strongly with aluminum, with a 1.4 Si/Al molar ratio (r² = 0.72, p < 0.05), which compares well to the Si/Al molar ratios of common aluminosilicate minerals (typically 1 to 4) (Deer et al., 1978). Though the XANES spectra of a few of these particles (2 out of 26, Sample 2 spectra in Fig. 4) resembled spectra of common iron-containing aluminosilicates, the majority of the spectra for these particles (24 out of 26, Sample 1 spectra in Fig. 4) are best matched by the common iron oxides. This result indicates that a majority of these particles contain a significant amount of iron in the form of oxides, which are oxidation products of Fe-aluminosilicates (Deer et al., 1978). The Si/Al molar ratio, coupled with XANES spectra indicating iron oxide, suggests that these particles are processed Fe-aluminosilicates. The remaining 14 % of iron-containing particles did not correspond to the trends observed for either Al-substituted Fe-oxides or processed Fe-aluminosilicates; thus, their mineralogy was undetermined.
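A minimal sketch of the two-group classification and the molar-ratio regressions used in this section, assuming single-particle Al and Si molar concentrations as inputs; the 0.1 thresholds mirror the cut-offs quoted in the text.

```python
import numpy as np
from scipy import stats

def classify_particles(al_mol, si_mol, si_cut=0.1, al_cut=0.1):
    # Boolean masks for the two groups described in the text:
    # Al-substituted Fe-oxides (low Si) and processed Fe-aluminosilicates
    # (Si and Al both above threshold); everything else is 'undetermined'.
    al, si = np.asarray(al_mol, float), np.asarray(si_mol, float)
    oxides = si < si_cut
    aluminosilicates = (si >= si_cut) & (al >= al_cut)
    return oxides, aluminosilicates, ~(oxides | aluminosilicates)

def molar_ratio_fit(x_mol, y_mol):
    # Least-squares slope (the molar ratio), r^2 and p-value, as used for
    # the Fe:Al and Si:Al relationships in this section.
    res = stats.linregress(x_mol, y_mol)
    return res.slope, res.rvalue ** 2, res.pvalue
```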
Mineralogy of iron-containing particles at different sites
Figure 5 also shows the distribution of Al-substituted Fe-oxide and processed Fe-aluminosilicate particles at the urban and rural sites. Al-substituted Fe-oxides and processed Fe-aluminosilicates are observed at both urban and rural sites.
Among the urban sites, South DeKalb and Fort McPherson show a mixture of both types of particles regardless of season, while Fire Station 8 particles were exclusively Al-substituted Fe-oxides in both winter and summer (Fig. 5, with the exception of 1 point). In addition, Fort Yargo contained both types of particles during the winter but was exclusively associated with processed Fe-aluminosilicates in the summer. The predominance of Al-substituted Fe-oxide and processed Fe-aluminosilicate particles in our samples is consistent with a previous study showing that iron aerosols collected in an urban area in Germany over a 5-month period were comprised of 78 % iron oxides and 22 % Fe silicates (Hoffmann et al., 1996). Though the Fe, Al, and Si molar concentrations of a portion of the single iron-containing particles were low (as shown in Fig. 5), making their mineralogy difficult to discern, our results clearly demonstrate that variations in iron mineralogy do exist in urban and rural PM2.5. However, analysis of more samples (e.g. >2 samples per site) is necessary to establish reliable spatial and seasonal trends in bulk iron mineralogy.
Investigating factors that control fractional iron solubility
Soluble and total iron contents were determined for all the urban and rural filters for different seasons using the ferrozine and micro X-ray fluorescence measurements. Results from these analyses are presented in Table 2. A significant amount of variability was observed in soluble iron on the filter samples collected in urban areas, with concentrations ranging from 3.4 to 47.9 ng m−3, while soluble iron on samples from rural areas was comparatively low, ranging from 4.3 to 5.8 ng m−3. These concentrations are typical of soluble iron in urban and rural aerosols in the Midwestern US (Majestic et al., 2007). A wide range of total iron concentrations was also observed on the samples, ranging from 15 to 1734 ng m−3. Although the total iron estimates had moderate uncertainty (Table 2), the majority (7 out of 8) of the iron concentrations from our samples were within the range (mean ± standard deviation) of typical iron concentrations observed at urban and rural sites in the Southeastern US (Table S2). However, the iron concentration (1734 ng m−3) observed at the urban site Fire Station 8 during the winter was much higher than typical concentrations observed in Atlanta, GA and at urban Southeastern US sites. Although Fire Station 8 is characterized as an urban Atlanta site with poor air quality (PM mass concentration generally 30 % greater than at other Atlanta sampling sites (Trail, 2010)), this total concentration was probably a direct result of an uneven distribution of iron on the filter: iron on this particular elemental map may have been concentrated with respect to the remaining sample area, leading to an overestimation of the total iron collected on the filter. This result reflects the moderate uncertainties associated with calculating absolute concentrations of aerosol components using synchrotron-based technology. With the exception of the Fire Station 8 winter sample, the data are within the acceptable range of typical iron aerosol concentrations in the Southeastern US. The Fire Station 8 winter sample was therefore omitted from further statistical analysis.
To investigate solubility in relation to other variables (e.g. iron speciation), the soluble iron concentration was normalized to the total iron content determined by micro X-ray fluorescence to yield fractional iron solubility (soluble iron/total iron). Fractional iron solubility was between 2 and 38 % (mean: 15.8 ± 11.8 %) at the individual urban and rural sites during different seasons (individual site data in Table 2). Though moderate uncertainty was associated with these solubility estimates (23–55 % error, predominantly due to the total iron measurements), it did not appear to have a significant impact on the trends observed in solubility (see Table 2 and Fig. 6). Overall, the fractional solubility levels observed in this study compare reasonably well to those found for common iron oxide (<1 %) and silicate (3–6 %) minerals (Journet et al., 2008; Schroth et al., 2009), suggesting that our mineralogy data correspond well to the expected fractional solubility levels. Fractional iron solubility was compared to a number of variables to assess their influence. No clear relationship was found between fractional iron solubility and total iron content (r² = 0.004, p > 0.05). These results are consistent with several studies that have reported fractional iron solubility as an inconsistent fraction of total iron, ranging anywhere from 0 to 80 % (Baker and Croot, 2010; Mahowald et al., 2005). In addition, speciation (oxidation state and mineralogy) was compared to fractional iron solubility. Figure 6 shows a moderate relationship between fractional iron solubility and the median single-particle Fe(II) data (r² = 0.56, Table 2); however, this trend is not statistically robust (p > 0.05). Given the limitations of this analysis (large variations in single-particle Fe(II) content and a small sample size), it is difficult to determine whether this moderate trend actually exists; our data suggest that Fe(II) content may not significantly impact fractional iron solubility. The absence of a trend with speciation is even more apparent when fractional iron solubility is compared to the variations in mineralogy observed at different sites. The most significant difference in particle mineralogy in this study was observed between the Fire Station 8 site (summer and winter) and the Fort Yargo site during the summer, where particles were exclusively Al-substituted Fe-oxides and processed Fe-aluminosilicates, respectively. No significant difference in fractional solubility was observed between these two sites, suggesting that iron speciation is not the only factor influencing solubility.
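The speciation-solubility comparison boils down to an ordinary least-squares regression across filters. A self-contained sketch with made-up stand-in values (the real per-filter numbers are in Table 2) is shown below.

```python
import numpy as np
from scipy import stats

# Illustrative, invented per-filter values standing in for Table 2:
median_fe2 = np.array([15.0, 20.0, 25.0, 28.0, 30.0, 35.0, 49.0])  # % Fe(II)
frac_sol = np.array([2.0, 14.0, 5.0, 20.0, 12.0, 25.0, 38.0])      # % soluble

res = stats.linregress(median_fe2, frac_sol)
print(f"r^2 = {res.rvalue ** 2:.2f}, p = {res.pvalue:.3f}")
# With only 7-8 filters, even a moderate r^2 can fail the p < 0.05 test,
# which is the 'suggestive but not robust' situation described in the text.
```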
Although no clear relationship between fractional iron solubility and the abundance of the major iron phases was observed in this study, several laboratory experiments on iron-containing crustal and oil fly-ash particles provide evidence supporting such a relationship (Journet et al., 2008; Schroth et al., 2009). One reason for the lack of a trend in this study is that the two dominant mineral phases observed (Al-substituted Fe-oxides and processed Fe-aluminosilicates) have similar fractional iron solubility levels (iron oxides <1 % and iron silicates 3–6 %; Journet et al., 2008; Schroth et al., 2009). Thus, various mixtures of these two phases are not expected to yield large variations in solubility, which is the case in our data, where fractional iron solubility is low and only slightly variable among different sites. A stronger association between mineralogy and solubility, however, is expected in areas where highly soluble iron minerals (e.g. iron sulfate) are the dominant source of iron in aerosols. For instance, iron oxides and silicates in crustal particles (<6 % fractional solubility) are significantly less soluble than iron sulfates in oil fly ash (∼80 % fractional solubility) (Journet et al., 2008; Schroth et al., 2009). No iron sulfates, which are observed in anthropogenic combustion sources, were detected in this study. However, Liu et al. (2005) showed that various anthropogenic combustion sources comprise a small but measurable component (<10 %) of Atlanta PM2.5. Given the limited single-particle analysis used in this study (i.e. the limited sample size), the small fraction of iron-containing particles from anthropogenic combustion sources may not have been detected. This small fraction of iron sulfates may have contributed to the somewhat enhanced solubility levels observed in our data at several sites (>6 %) compared to those of pure iron oxide (<1 %) or silicate (3–6 %) minerals. In addition, low levels of iron sulfates may also explain the minor differences in solubility levels between different sites and seasons.
Another factor possibly affecting the relationship between mineralogy and solubility in this study is that the ambient aerosol may have undergone a variety of atmospheric processes altering its chemical and physical properties. Oakes et al. (2010) showed evidence for enhancements in Fe(II) aerosol solubility in acidic sulfate plumes in Georgia. Secondary sulfate makes up a significant portion of particle mass (∼50 %) in the Southeastern US (Liu et al., 2005) and can form sulfuric acid in the absence of sufficient neutralizing cations. In this study, acid-processing mechanisms may play a more central role than speciation, or work synergistically with speciation, to influence iron solubility. This would be consistent with Shi et al. (2011), who demonstrated that fractional iron solubility in dust aerosols is highly sensitive to particle acidity (i.e. acid-processing mechanisms) and less dependent on other factors (e.g. particle source/mineralogy and size). In addition to acid-processing mechanisms, particle size can play an important role in fractional iron solubility. In this study, the size range of particles was limited (0.4–2.5 µm aerodynamic diameter, the lower limit set by the spot size of the X-ray beam), and minor differences in particle size were difficult to detect. Thus, the relationship between particle size and fractional iron solubility could not be evaluated. More detailed studies involving particle pH, size, iron speciation, and fractional iron solubility are necessary to better understand the factors that influence iron solubility in ambient aerosols.
While the results of this study clearly demonstrate the value of single-particle synchrotron-based analysis in determining aerosol speciation, there are uncertainties associated with this approach. One uncertainty lies in representing bulk sample properties with single-particle measurements, which comprise a limited portion of the sample. Though the results in this study showed distinct variations in single-particle iron speciation, these properties may not be representative of the entire sample, making them difficult to compare to other bulk properties (e.g. fractional iron solubility). The results of this study show the need for both single-particle and bulk iron speciation measurements in order to fully understand their impact on fractional iron solubility in aerosols.
Atmospheric implications: insight on human health toxicity
Fine aerosols that contain iron have been shown to generate toxic levels of ROS (Shafer et al., 2010; Zhang et al., 2008).
Recent experiments have related toxicity to iron oxidation state in nanoparticles (diameters smaller than 100 nm). Reduced iron in nanoparticles, present as water-soluble or crystalline Fe(0) or Fe(II), has been shown to be more efficient than Fe(III) in ROS generation (Auffan et al., 2008; Keenan et al., 2009). For example, Auffan et al. (2008) showed that oxidation of Fe(0) and Fe(II) oxides (e.g. magnetite) immediately produces ROS, while Fe(III) oxides (e.g. maghemite) produced little to no ROS within one hour. Although the particles in this study are larger (approximately 0.4–2.5 µm) and presumably less reactive than nanoparticles due to their smaller surface area per unit mass, the same mechanisms are likely involved in the formation of ROS via iron-mediated pathways (e.g. Fenton reactions). Thus, Fe(II) is a plausible precursor for the immediate production of ROS in PM2.5. The ambient particles we investigated contained various amounts of Fe(II) and Fe(III), with the Fe(II) fraction accounting for approximately 5 to 35 % of total iron. These results indicate that a significant portion of iron-containing particles is in a redox state that can produce ROS immediately. Although Fe(II) is not always soluble in ambient aerosols (a factor strongly associated with ROS formation), particle-bound Fe(II) may interact with specific species (e.g. acidic aerosol) during atmospheric transit, promoting its solubility.
Conclusions
We present a novel approach for exploring the speciation of iron in single atmospheric fine particles collected in urban and rural regions during different seasons using synchrotron-based XANES spectroscopy and microscopic X-ray fluorescence techniques. The majority of the particles contained mixtures of oxidized (Fe(III)) and reduced (Fe(II)) iron, with an average of 25 % of the iron present as Fe(II). Particulate iron from the urban and rural sites in Georgia was observed primarily in two phases, Al-substituted Fe-oxides and processed Fe-aluminosilicates. Though the composition of these aerosols was substantially different from that of pure minerals, it was consistent with the modifications that occur during oxidation processes. Based on the techniques used in this study, variations in the abundance of Al-substituted Fe-oxides and processed Fe-aluminosilicates did not coincide with fractional iron solubility. Fractional iron solubility may be controlled by iron minerals from minor sources, for example anthropogenic combustion sources of iron sulfates that were not detected by XANES as a component of the overall mineralogy. In addition, other physical or chemical properties (e.g. particle acidity or size) may act in conjunction with mineralogy to influence solubility. These other properties may control the toxicity of iron-containing particles more than bulk mineralogy.
Fig. 1. Elemental maps (30 × 30 µm) of iron (red), aluminum (green), and silicon (blue) from the South Dekalb 11/11/08 filter sample. The fourth map is a colocation map, in which the iron map is superimposed on the aluminum and silicon maps. White particles on the colocation plot indicate areas where iron, aluminum and silicon are concentrated together. The yellow circles on the colocation plot indicate 3 iron-containing particles that are enriched in aluminum and silicon.
Fig. 3. (A) XANES spectra of a representative oxidized particle (blue line) and a reduced Fe particle (red line). (B) Example of the normalized pre-edge centroid position of a representative oxidized particle (blue line) and reduced particle (red line).
Fig. 5. Scatter plot of iron and aluminum molar concentrations in iron-containing particles identified on the urban and rural filters. The color scale denotes the silicon content of the particles. The blue outline marks single particles that are Al-substituted Fe-oxides. Particles from urban sites are represented by open colored circles; particles from the Fort Yargo summer and Fire Station 8 samples are represented by colored diamonds and black Xs, respectively.
Table 2. Solubility results for urban and rural filters. (The table body was lost in extraction; only the footnotes are partially recoverable: number of XANES elemental maps used to determine total Fe concentration; single-particle oxidation state measured by XANES spectroscopy, reported as the mean value per filter; outlier flag for concentrations outside the standard deviation of typical total PM2.5 Fe measured on filters collected in urban and rural areas of the Southeastern US.)

| 8,993.6 | 2012-01-16T00:00:00.000 | [ "Physics" ] |
Numerical Analysis of Adhesive Joints with Bi-Layered Adherends
Improvement of joint quality has always been a concern for assembled components, owing to the numerous applications of such components in industrial sectors such as the aerospace, automotive and electrical/electronic industries. With this in mind, extensive research has been carried out, and is still ongoing, to enhance the characteristics of the various parts that make up an assembly and of the complete assembly itself. Part of this research has focused on adhesively bonded joints of different geometries, such as L-shape, single-lap, double-lap, tubular, T-shaped, stepped and scarf joints. The single-lap joint has received particular attention because of its simple geometry and ease of fabrication, and various forms of arrangement have been investigated. Owing to its untimely failure, caused by high stress concentration in the overlap region and in other areas sensitive to peel damage, various modified geometries have been adopted, ranging from tapering and stepping to wavy lapping of the overlapping layers, in order to reduce or mitigate the stress concentration for improved load-bearing ability. Because such modified geometries are often difficult to fabricate, this study focuses on the double-layer single-lap joint. Its harmonic response when subjected to external dynamic loading was investigated using ANSYS finite element analysis. The numerical analysis was carried out on various adhesive-adherend arrangements, and the harmonic responses obtained showed that the double-layer single-lap joint has improved load-bearing capacity compared with other geometrical designs.
I. INTRODUCTION
With the aim of enhancing the quality of joints in assembled components, and given the extensive application of adhesively bonded joints in industrial sectors such as the aerospace, automotive and electrical/electronic industries, there has been an increase in the approaches used to study the response of adhesively bonded joints. Different geometries are available for adhesively bonded joints depending on the area and type of application, including L-shape, single-lap, double-lap, tubular, T-shaped, stepped and scarf joints [1]. Adhesively bonded joints have many advantages over traditional mechanical fasteners; some of these can be seen in aircraft, where fibre-reinforced polymer matrix composites (FRPCs) are used for improved damage tolerance and lower structural weight [2]. The most common disadvantage associated with adhesively bonded joints is the fatigue damage they experience, which may be due to continuous vibration, crash and impact; for this reason it is necessary to study their dynamic response and behaviour.
The approach to dynamic behaviour analysis may be experimental, numerical or analytical [3]; in this study only one of the three is considered, namely numerical finite element analysis.
II. LITERATURE REVIEW
Adhesively bonded joints have found use in many industrial and even non-industrial applications, joining similar materials, as every other mode of joining does, but also dissimilar materials, which is a distinctive advantage [4]. They are used in industrial sectors such as the automotive, marine, railway and, especially, aircraft industries [5] to build complex structures of immense dimensions.
Researchers over the years have studied how adhesively bonded joints behave under stress and how to reduce the stress concentrations that often lead to fatigue and consequent damage of the joint and adherends, since moving machinery is subjected to vibrations, collisions and impact [6][7]. The behaviour of these joints can be investigated analytically, studied numerically [9][10][11] and examined experimentally [12][13][14][15].
Much analysis, study and examination has followed since Volkersen's first work, in which he derived the shear-lag model [16]. Further analytical research considered the adherend's bending behaviour, leading to the model proposed by Goland and Reissner [17], which was based on the stress analysis of beams. The classical work of Hart-Smith [18][19][20], one of the breakthroughs in the analytical approaches to this problem, simplified the complex stress-strain behaviour by applying an elasto-plastic, or bilinear, curve.
Since then, much analytical research has been done; for example, Delale et al. derived a two-dimensional solution for bonded assemblies [21]. Shear and peel stresses in both metallic and composite adherends have also been studied, and analytical models such as that of Zou and others have been developed [22]. By considering the double-lap bonded geometry as an overlaminate joint, Osnes and MacGeorge [23] investigated the shear stress on the joint under load. They later extended their own work to take into account the adhesive's elastomeric behaviour [24].
Concerted efforts have also been made not only with the analytical methods highlighted above but also with a powerful and efficient tool like Finite Element Analysis (FEA), to show how adhesively bonded single-lap joints behave and how their strength is affected: under the influence of shear and peel stresses by Magalhaes and co-workers [25], failure mechanisms by Kim K. and others [26], overlap length by Campilho et al. [27,28], adhesive damage by Benyahia and co-workers [29], the influence of cohesive variables by Fernández-Cañadas and others [30], and patch dimensions by Fekih et al. [31]. It has been found that joint strength can be increased when uniform load transfer occurs, which can be brought about by the plastification of ductile adhesives; this was reported by Keller [32], [34], whose results showed that the peel and shear stress values were greatest near the ends of the adhesive region. Less than two years ago, William and Jialai [35] developed a 3D model based on Finite Element Analysis, using functionally graded materials as substrates, to address the limitations of the 2D model by Goland and Reissner [36]. Their work was motivated by the fact that the two-parameter elastic foundation (2-PEF) model cannot capture the boundary conditions where the shear stresses are significantly larger and different. Their three-parameter elastic foundation model addressed this flaw, and their results showed that stress concentrations near the ends of the joint can be reduced by lowering the Young's modulus of the adhesive layer, increasing the adhesive layer thickness, and/or configuring the FG adherends so that the stiffer part is nearest the adhesive layer [35]. Much research based on experimental analysis has also been reported; for example, Prabhakar and Garcia recently found that by using 3D-printed reinforcement (i.e. infusing structural strengtheners into the adherends through the fused deposition modelling (FDM) additive technique), the apparent shear strength of adhesively bonded single-lap joints can be improved by about 832 % [2]. Research on bonded adhesives exploring the field of nano-science was carried out by Salih, Iclal and others [37]: in their experimental analysis of adhesively bonded single-lap joints with nano-composite adhesives, obtained by adding nanostructures, they varied the concentration of three different nanostructures in the adhesive and measured the experimental failure loads. They observed that the nanocomposite adhesives increased the joint's failure load, with the rate of increase depending on the structural make-up of the adhesive and the type of nanostructure used; the result at a 0.01 ratio was the best.
To the best of the authors' knowledge, none of the research highlighted above, whether numerical, analytical or experimental, has examined how an adherend behaves when subjected to vibration while the other adherend is fixed at the opposite end. This study addresses that question by exploring the harmonic response of a singly supported, double-layer single-lap joint. The only similar study found was that of Khalid and Ramzi, who subjected single-lap and double-lap joints to axial loads while the other end was fixed [1]. Here, one end of the adherend is fixed while the other is excited by a hammer-like impulse, setting up a vibration in the system. The response, as stated earlier, is analysed numerically using the Finite Element Analysis (FEA) approach.
III. NUMERICAL MODEL
Using the ANSYS finite element software package, a numerical analysis was carried out to obtain the harmonic response of the double-layer single-lap joint, with similar or dissimilar adherends, when subjected to harmonic load.
The model's width (z-direction) is much greater than its thickness; it is therefore treated as a two-dimensional plane-strain configuration, meaning that the strain in the cross-sectional plane dominates while the strain in the z-direction is taken to be negligible.
With the adoption of a 2-dimensional model, the adhesive is a single layer of 1 mm thickness and each adherend consists of two layers of 1 mm each. The adherends are made of aluminium and steel, arranged in two different sequences in order to obtain different responses under harmonic loading.
The adhesive used has a Young's modulus of 4.1 × 10^8 Pa (410 MPa), a Poisson's ratio of 0.4 and a density of 1800 kg/m³. The properties of the adherends used are given in Table I.
The geometry is designed such that the overlap region of the adhesive/adherend joint has a length of 50 mm while the extended parts have a length of 100 mm each, giving an adherend length of 150 mm. The sensitivity of the resonance frequencies to mesh size was checked by a convergence analysis, which showed that varying the mesh size from 0.1 mm to 1 mm has little effect on the natural frequency values obtained. Accordingly, a fine mesh with an element size of 0.25 mm was applied to the adherends and 0.1 mm to the adhesive. Figure 1 shows the meshing of the model. A SURF153 element was used; this element has three nodes and two degrees of freedom per node, with axial and transverse displacement. The boundary conditions are imposed such that the left corner of the upper adherend layer is fixed while the lower adherend layer is subjected to a harmonic force of 1 N; the two adherend layers (upper and lower) are prevented from moving in the transverse direction. Figure 2 shows the boundary conditions imposed on the model. The finite element analysis in the ANSYS Workbench platform proceeded as follows: material properties were first assigned to the adherends and the adhesive; the geometry was then created, with bonded contact conditions applied between each layer of adherends and adhesive; meshing and modal analysis were carried out to generate the first six (6) resonance frequencies and mode shapes; finally, using the generated frequencies and mode shapes, the frequency response graphs were plotted using the mode superposition technique.
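To make the mode-superposition step concrete, here is a hedged Python sketch of how a frequency response curve can be assembled from modal results (natural frequencies plus modal participation and damping). The six frequencies and participation factors below are hypothetical; only the first echoes the resonance reported in the next section, and the damping ratio is an assumed modal value, not one taken from the paper.

```python
import numpy as np

def mode_superposition_frf(freqs_hz, nat_freqs_hz, participation, zeta=0.02):
    # Receptance-style frequency response assembled from modal results:
    # sum over modes of gamma_i / (wn_i^2 - w^2 + 2j*zeta*wn_i*w).
    w = 2 * np.pi * np.asarray(freqs_hz, float)[:, None]       # excitation, rad/s
    wn = 2 * np.pi * np.asarray(nat_freqs_hz, float)[None, :]  # natural, rad/s
    gamma = np.asarray(participation, float)[None, :]
    h = gamma / (wn ** 2 - w ** 2 + 2j * zeta * wn * w)
    return np.abs(h.sum(axis=1))

# Six hypothetical modes; only the first echoes the ~7.2 kHz resonance
# reported for the Al-St-Ad-Al-St arrangement in the next section.
response = mode_superposition_frf(
    freqs_hz=np.linspace(1e3, 1e5, 2000),
    nat_freqs_hz=[7202, 15000, 28000, 45000, 70000, 95000],
    participation=[1.0, 0.6, 0.4, 0.3, 0.2, 0.1],
)
```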
A. Validation
To validate our model, which is based on the frequency response of the joint to harmonic loads, several simulations were carried out using multilayers of aluminium and steel as adherends stacked in different sequences; the properties of both aluminium and steel are as given in Table I. The adhesive used has a Young's modulus of 410 MPa, a Poisson's ratio of 0.4 and a density of 1800 kg/m³. Two different arrangements of dissimilar aluminium-steel joints are studied, and the corresponding frequency responses are recorded and analyzed. Table II shows the two arrangements of the double-layer single-lap joint; all frequency responses were calculated over the range 1000 Hz to 100 kHz.
The frequency response of the first double-layer arrangement (Al-St-Ad-Al-St) is illustrated in Fig. 3; its first natural frequency from the numerical modal analysis is 7202 Hz. The second double-layer arrangement (Al-St-Ad-St-Al) was also analyzed for its frequency response, shown in Fig. 4; its first natural frequency from the numerical modal analysis is 7205 Hz. Table III below shows the first, second, third and fourth natural frequencies and the error margin between the two arrangements of the double-layer single-lap joint.
V. CONCLUSION
Using double-layer single-lap joint models with different adhesive-adherend arrangements, the harmonic responses of the joints were derived numerically with the ANSYS finite element software. The harmonic response results showed that the double-layer single-lap joint has improved load-bearing capacity.
From the numerical analysis carried out in this research, it can be said that the natural frequencies generated depend on the order of the substrate (adherend) arrangement. Judging from the numerical results shown and discussed in the previous sections, the double-layer single-lap joint is stable, since the four (4) natural frequencies obtained from the 1st (Al-St-Ad-Al-St) and 2nd (Al-St-Ad-St-Al) arrangements increase in a consistent manner. Future work could examine additional adhesive-adherend arrangements, which may yield improved results.

| 3,435.6 | 2020-06-28T00:00:00.000 | [ "Engineering", "Materials Science" ] |
Integral Membrane Protein 2A Is a Negative Regulator of Canonical and Non-Canonical Hedgehog Signalling
The Hedgehog (Hh) receptor PTCH1 and the integral membrane protein 2A (ITM2A) inhibit autophagy by reducing autolysosome formation. In this study, we demonstrate that ITM2A physically interacts with PTCH1; however, the two proteins inhibit autophagic flux independently, since silencing of ITM2A did not prevent the accumulation of LC3BII and p62 in PTCH1-overexpressing cells, suggesting that they provide alternative modes to limit autophagy. Knockdown of ITM2A potentiated PTCH1-induced autophagic flux blockade and increased PTCH1 expression, while ITM2A overexpression reduced PTCH1 protein levels, indicating that it is a negative regulator of PTCH1 non-canonical signalling. Our study also revealed that endogenous ITM2A is necessary for timely induction of myogenic differentiation markers in C2C12 cells since partial knockdown delays the timing of differentiation. We also found that basal autophagic flux decreases during myogenic differentiation at the same time that ITM2A expression increases. Given that canonical Hh signalling prevents myogenic differentiation, we investigated the effect of ITM2A on canonical Hh signalling using GLI-luciferase assays. Our findings demonstrate that ITM2A is a strong negative regulator of GLI transcriptional activity and of GLI1 stability. In summary, ITM2A negatively regulates canonical and non-canonical Hh signalling.
Introduction
The Hedgehog (Hh) signalling pathway has essential functions in embryonic development, tissue homeostasis and cancer [1]. PTCH1 is the most important Hh receptor and a well-established tumour suppressor [2]. The canonical signalling pathway is initiated by binding of one of the Hh ligands (Shh, Ihh, or Dhh) to the 12-transmembrane receptor Patched1 (PTCH1). In the absence of Hh ligands, PTCH1 represses the activation of the G protein-coupled receptor Smoothened (SMO), resulting in processing of the GLI2 and GLI3 transcription factors into repressors. Binding of a Hh protein to PTCH1 inhibits its intrinsic activity, believed to be the transport of cholesterol across the plasma membrane to generate a localized gradient, leading to derepression of SMO and its accumulation at the primary cilium. Ciliary SMO induces activation of GLI2 and GLI3 and prevents their processing, initiating a GLI-dependent transcriptional response that results in upregulation of the short-lived strong activator GLI1 and of the Hh protein-sequestering membrane proteins PTCH1, PTCH2 and HHIP, among other cell type-specific targets.
More recent studies have revealed that PTCH1 has additional functions independently of SMO/GLI, known as "Type I non-canonical Hh signalling" to differentiate it from SMO-dependent, GLI-independent processes [2,3]. The C-terminal domain (CTD) of PTCH1 is dispensable for canonical Hh signalling [4,5], although one study suggests that the CTD is necessary for PTCH1 localization to the primary cilium [6]. The first identified role of the CTD was a pro-apoptotic function, mediated by interaction with the DRAL/TUCAN/Caspase9 complex and independent of SMO [7][8][9][10]. With the goal of identifying additional proteins that interact with the CTD of PTCH1, we previously performed a yeast-2-hybrid study, in which we identified the autophagy-related protein ATG101, a subunit of the ULK complex that mediates initiation of autophagy [11]. Our study established that the cytosolic CTD of PTCH1 inhibits autophagic flux by impairment of autophagosome-lysosome fusion or acidification and that its interaction with ATG101 was important for that function. Paradoxically, ATG101 regulates autophagy initiation and expansion of phagophore membranes to form the autophagosome, and no role in the terminal step of autophagy has been discovered. This led us to hypothesize that PTCH1 might block autophagy completion with the aid of additional interacting proteins. In this study, we focus on the potential regulatory role of integral membrane protein 2A (ITM2A), which was identified in a large proportion of our positive clones in the yeast-2-hybrid screen. Remarkably, out of 63 sequenced and validated positive hits, ITM2A was recovered in 21 of them (33%). A novel function of ITM2A in blocking autophagic flux by an inhibitory interaction with v-ATPase was reported a few years ago [12], suggesting a potential functional interaction with PTCH1. However, the role of ITM2A in autophagy remains controversial since it was proposed to induce autophagy in breast cancer cells [13].
ITM2A is a type II single-pass membrane protein with an intracellular N-terminal domain and an extracellular BRICHOS domain [14]. Two alternatively spliced forms of ITM2A have been described: the long isoform of 263 aa (30 kDa) and a shorter isoform of 219 aa lacking the transmembrane domain (25 kDa). Although the function of ITM2A is not completely elucidated, it has been linked to differentiation of skeletal muscle and cartilage tissue and to T cell development [15,16]. Early reports indicate that ITM2A levels increase during skeletal muscle differentiation and the formation of myotubes in vivo and in C2C12 cells, a model of in vitro myogenesis [15,17,18]. Overexpression of ITM2A can accelerate myotube formation; however, it is not essential for muscle development, as indicated by in vitro knockdown studies and by conditional knockout mice [17,18]. Canonical Hh signalling prevents myogenic differentiation by keeping C2C12 and primary satellite muscle cells in the proliferative state [19]. In particular, Gli1 and Gli2 were shown to repress MyoD-dependent transcription, preventing terminal myotube differentiation [20]. This suggests that ITM2A function could be inversely correlated with canonical Hh signalling activity.
Given the involvement of ITM2A in biological processes that are regulated themselves by different types of Hh signalling and the potential physical interaction with PTCH1, we decided to investigate their crosstalk.
Reagents
The antibodies used in this study (name, catalog number, vendor and dilution used in Western blotting) are described in Supplementary Table S1. To generate stable shRNA cells, cells were transduced with lentiviral particles encoding scrambled or ITM2A-targeting shRNA sequences (Table 1), produced in HEK293T cells as previously described [21]. Table 1. shRNA sequences used to silence ITM2A in human and mouse cell lines.
Name | Sequence (5′-3′)
Cell Culture
C2C12 cells were cultured in DMEM containing 10% fetal bovine serum and penicillin/streptomycin in a humidified 5% CO2 incubator. To keep the cells in the undifferentiated state, they were split upon reaching 70% confluency. Differentiation was induced by replacing the medium of 100% confluent cultures with DMEM containing 2% horse serum (Sigma cat. H1270) and penicillin/streptomycin. The day the differentiation medium was added was designated day 0, and the cells were kept for 7 days with the medium refreshed every other day. Stable shRNA ITM2A and control C2C12 cells were generated by transduction of undifferentiated cells with lentiviruses encoding hairpin RNAs (Table 1), followed by selection with puromycin.
Quantitative PCR
Total RNA was isolated from each of the three cell lines with the RNeasy Mini Kit (QIAGEN, Manchester, UK; cat. 74104) and quantified by A260nm/A280nm on a Nanodrop. cDNA was synthesised from 1 µg RNA with the iScript cDNA Synthesis Kit (BioRad) using random hexamer primers, following the manufacturer's instructions. Real-time quantitative PCR was performed using 1-5 µL of cDNA and target-specific primers (Table 2) with the SsoFast EvaGreen Supermix (BioRad) on a CFX Connect Real-Time PCR Detection System (BioRad) thermocycler. Amplification was quantified and expressed as fold change using the ∆∆Ct method.
For transfections, 8 µg of total DNA was used with the transfection reagent. For co-immunoprecipitation (co-IP) assays, two different plasmids were used in the same reaction (4 µg of each plasmid). Twenty-four hours after transfection, cells were washed with 2 mL of ice-cold 1× PBS and scraped in 700 µL of co-IP lysis buffer (50 mM Tris-HCl pH 7.5, 150 mM NaCl, 1% NP-40, 0.05% sodium deoxycholate, 1 mM EDTA, 2.5 mM MgCl2) supplemented immediately before use with 1× Proteoloc protease inhibitor cocktail (Gentaur, London, UK; cat. 44204), 0.2 mM PMSF and 1 mM DTT. Lysates were incubated at 4 °C for 30 min with rotation and then centrifuged at 13,000 rpm for 15 min at 4 °C. The supernatant was separated, and 200 µL was set apart as whole cell lysate (WCL) and mixed with 40 µL of 6× Laemmli buffer. The remaining supernatant was incubated with 4 µg primary antibody (Table 3) at 4 °C with rotation for 1.5 h, followed by the addition of 30 µL Dynabeads Protein G (Thermo Fisher; cat. 10003D) for an additional 1 h incubation at 4 °C with rotation. A magnetic rack was used to collect and wash the beads three times with 1 mL of co-IP lysis buffer. Immunoprecipitates were eluted from the beads by the addition of 18 µL of 2× Laemmli buffer. Both beads and WCL were incubated on a heat block at 45 °C for 25 min and stored at −80 °C for Western blotting.
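As a minimal illustration of the ∆∆Ct fold-change calculation described above (this is not the authors' analysis code; the gene names and Ct values below are hypothetical placeholders):

```python
# Minimal sketch of relative quantification by the delta-delta-Ct method.
# Ct values and gene names below are hypothetical placeholders.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Return fold change of a target gene, normalized to a reference gene."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # normalize sample to reference gene
    delta_ct_control = ct_target_control - ct_ref_control   # normalize control to reference gene
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)                         # assumes ~100% PCR efficiency

# Example: ITM2A knockdown relative to a reference gene in shITM2A vs shScrambled cells
print(fold_change(ct_target_sample=26.5, ct_ref_sample=18.0,
                  ct_target_control=24.0, ct_ref_control=18.1))  # ~0.16, i.e. ~84% knockdown
```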
Gli-Luciferase Assay
NIH3T3 cells and Ptch1 −/− mouse embryonic fibroblasts (MEFs) were used for this assay. Cells were grown in a 10 cm culture dish to an approximate confluence of 90%, trypsinised, and seeded in 24-well plates at a 1:5 dilution in growth media. Cells were transfected 24 h after plating using different transfection reagents for each cell line: FuGENE ® HD (Promega, Southampton, UK; cat. E2311) for NIH3T3 and TransIT-X2 ® (Mirus Bio, Madison, WI, USA; cat MIR6003) for MEFs.
Assays were always conducted in triplicate. For each well, fixed amounts of p8xGBS-Luc (135 ng), pRL-SV40 or pRL-TK (15 ng), and testing plasmid DNA (375 ng, containing a single plasmid or a combination of plasmids) were transfected with 1.5 µL of the transfection reagent, following the manufacturer's instructions. After 24 h, transfected cells were incubated with serum starvation media (DMEM, 0.5% FBS, 1% GlutaMAX) for an additional 48 h. Cells were washed with 0.5 mL of PBS and lysed with 100 µL passive lysis buffer with vigorous shaking at room temperature for 15 min. The Dual-Luciferase Reporter Assay System (Promega; cat. E1910) was used to measure the activity of Firefly luciferase and Renilla luciferase using a Promega GloMax 20/20 luminometer (cat. E5311). Values of the individual measurements were generated as relative luciferase units (RLUs) and as the Firefly-luciferase/Renilla-luciferase ratio.
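The normalization described above reduces to a simple per-well ratio; a minimal sketch (the RLU readings are hypothetical, not the authors' data):

```python
# Firefly/Renilla normalization for a dual-luciferase reporter assay.
# Readings (relative luciferase units, RLUs) below are hypothetical.

def normalized_activity(firefly_rlu, renilla_rlu):
    """Normalize the GLI reporter (Firefly) signal to the transfection control (Renilla)."""
    return firefly_rlu / renilla_rlu

triplicate = [(15200, 820), (14100, 790), (16050, 860)]  # (Firefly, Renilla) per well
ratios = [normalized_activity(f, r) for f, r in triplicate]
mean_ratio = sum(ratios) / len(ratios)
print(f"mean Firefly/Renilla ratio: {mean_ratio:.1f}")
```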
Mass Spectrometry
To prepare the samples for proteomic analysis, 3 × 10⁵ HEK293 cells were seeded in 10 cm culture dishes and transiently transfected 24 h later with HA-ITM2A or empty pcDNA3.1 plasmid, using 8 µg of DNA and 20 µL of Lipofectamine 2000. Twenty-four hours after transfection, cells were washed with 2 mL of ice-cold 1× PBS and processed for IP as described above. Samples were submitted in solution to the mass spectrometry facility of the University of Leeds for the identification of co-immunoprecipitated protein partners and for the detection of protein post-translational modifications. Protein digestion was carried out with trypsin. Data were processed by the facility using the Peaks software (Bioinformatics Solutions, Waterloo, ON, Canada).
STRING Protein Association Network
An interactome map was constructed for the ITM2A-specific proteins identified by the mass spectrometry analysis using the STRING database with default settings; the confidence score was set to medium (0.4), with network edges showing the confidence of an interaction.
Statistical Analysis
GraphPad Prism 9 was used for the statistical analysis and for the generation of graphs. Unless otherwise specified, three biological replicates were performed. Error bars are shown as standard error of the mean (SEM). A two-tailed paired Student's t-test was used to test for significant differences between two groups. One-way ANOVA was used to compare three or more groups when the samples showed normal distribution and equal variance.
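A sketch of the comparisons described, using SciPy rather than GraphPad Prism (the replicate values are placeholders):

```python
# Paired two-tailed t-test for two groups; one-way ANOVA for three or more groups.
from scipy import stats

control  = [1.00, 1.05, 0.95]   # hypothetical replicate measurements
treated  = [0.42, 0.50, 0.47]
treated2 = [0.71, 0.66, 0.75]

t_stat, p_two = stats.ttest_rel(control, treated)        # paired, two-tailed by default
f_stat, p_anova = stats.f_oneway(control, treated, treated2)
print(f"paired t-test p = {p_two:.4f}; one-way ANOVA p = {p_anova:.4f}")
```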
ITM2A Physically Interacts with the Hedgehog Receptor PTCH1
Following the interaction studies in yeast cells, we tested whether human PTCH1 interacts with ITM2A in mammalian cells by co-immunoprecipitation. First, we determined that full-length PTCH1-HA was capable of interacting with myc-tagged ITM2A when co-expressed in HEK293 cells. Reciprocal pulldowns using anti-myc or anti-HA antibodies confirmed the interaction in mammalian cells (Figure 1A). Unexpectedly, the CTD alone (aa 1176-1447) in soluble form was unable to interact strongly with ITM2A (Figure 1A). Since the yeast-2-hybrid experiment that initially indicated an interaction of ITM2A with PTCH1 used only the CTD as bait, we considered the possibility that co-localisation at the plasma membrane is required for interaction of the proteins in mammalian cells. Therefore, we expressed a membrane-associated CTD (through fusion with a myristoylated GFP; CAAX-eGFP-CTD) [10] and compared its interaction with ITM2A to that of full-length PTCH1-eGFP, with empty myr-eGFP as a negative control. Immunoprecipitation of myc-ITM2A resulted in detection of full-length PTCH1 and the CTD fragment, indicating that membrane association of the CTD was sufficient for the interaction (Figure 1B).
To determine which domains of PTCH1 are necessary for PTCH1-ITM2A physical interaction, we generated PTCH1 mutants with deletions of the two largest intracellular domains, the middle loop (ML) and the CTD [5]. Removal of the ML alone (PTCH1∆ML) or both ML and CTD (PTCH1∆ML∆CTD) did not prevent ITM2A interaction, indicating that, in addition to the CTD, the transmembrane and/or extracellular loops of PTCH1 independently interact with presumably different regions of the transmembrane protein ITM2A ( Figure 1C).
PTCH1 has recently been shown to form stable dimers even in the absence of Hh ligands [22]. We reasoned that ITM2A binding to the CTD and additional domains could cluster together two molecules of ITM2A. Thus, we tested whether ITM2A is competent to form self-interactions, since such self-association would be permissive for the formation of higher-order complexes with PTCH1 dimers. As shown in Figure 1D, ITM2A forms strong interactions with itself, supporting a stoichiometric model of 2 PTCH1 : 2 ITM2A molecules in the complex.
ITM2A Does Not Mediate PTCH1 Autophagic Flux Inhibition
Since ITM2A was also reported to reduce autophagic flux via interaction with the vacuolar ATPase (v-ATPase) [12], and we confirmed that ITM2A is a bona fide PTCH1 CTD interactor, we hypothesised that ITM2A could be a mediator of non-canonical PTCH1 signalling in autophagy. Transient transfection of PTCH1 or ITM2A alone increased the levels of LC3BII, the lipidated form of LC3B and a marker of autophagosome formation. However, addition of Bafilomycin A1 during the last 4 h did not further increase LC3BII in cells expressing PTCH1 or ITM2A, unlike in cells transfected with empty vector, suggesting that the increase reflects LC3BII accumulation caused by a blockade of autophagic flux at the final step of autophagosome-lysosome fusion (Figure 2A,B). Co-expression of PTCH1 and ITM2A had a very similar effect (Figure 2A,B), although we observed a reproducible reduction of PTCH1 expression level when co-expressed with ITM2A (Figure 2A,D), which was not associated with changes in PTCH1 ubiquitylation (Figure S2). Densitometry-based analysis of the magnitude of the Bafilomycin A1 effect revealed that autophagic flux was reduced by more than 80% in all groups (Figure 2C).
Since the lack of an additive effect of PTCH1 and ITM2A could be the result of maximal inhibition of autophagic flux by overexpression of either of the two proteins, we decided to study the effect of silencing endogenous ITM2A using shRNA. HEK293 cells were transfected with shITM2A and selected with puromycin to generate multiclonal cells; however, ITM2A expression was restored after several passages (data not shown). This might be because, as a heterogeneous population, cells expressing higher levels of ITM2A had a growth advantage and their proportion increased during passaging. Given this problem, we isolated shITM2A single cell-derived clones using the limiting dilution method. Three different shRNA sequences targeting ITM2A and a control sequence (shScrambled) were used, and clones with the highest knockdown efficiency that was maintained for several passages were used for the following experiments. Clone shITM2A-C maintained a minimum of 65% knockdown (Figure 3A).
Transient expression of PTCH1 in shITM2A-A HEK293 cells showed a small but significant increase in LC3BII and p62 (an autophagy scaffolding protein that is degraded along with the cargo) accumulation by PTCH1 compared to control shScramble cells (Figure 3B,C) and a larger inhibition of autophagic flux (Figure 3D). The enhancing effect of ITM2A depletion on PTCH1-dependent autophagy regulation was also observed in HeLa cells (Supplementary Figure S1A,B). Interestingly, silencing of ITM2A resulted in a small increase of PTCH1 expression levels (Figure 2A and Supplementary Figure S1A). Altogether, these findings suggest that ITM2A is not necessary for autophagy regulation by PTCH1, and that it might actually reduce non-canonical PTCH1 signalling by modulating its turnover.
Increase in ITM2A Levels Accompanies Reduction of Autophagic Flux during Skeletal Muscle Differentiation
Given the involvement of Hh signalling and ITM2A in myotube differentiation of the C2C12 myoblastic cell line, and that ITM2A has been reported to be a marker of skeletal muscle differentiation, we investigated their potential interplay during in vitro differentiation of C2C12 myoblasts into myotubes (Figure 4A). As C2C12 cells differentiate at high density, they upregulate myosin expression from day 3 onwards (Figure 4B). As previously reported [15], ITM2A levels increase significantly during differentiation (Figure 4B). To investigate the regulation of autophagy by endogenous ITM2A and endogenous PTCH1, we first investigated whether there were any changes in basal autophagic flux during the 7 days of differentiation of C2C12 cells. The levels of p62 and LC3BII increased during differentiation, becoming gradually more insensitive to Bafilomycin A1 treatment (Figure 4C,D). This observation suggests that autophagic flux is reduced in differentiated myotubes compared to proliferating myoblasts.
We next generated stable C2C12 myoblastic lines with reduced ITM2A expression using three different shRNA sequences. Puromycin-resistant shITM2A cells were analysed for knockdown of ITM2A. Cells stably expressing the shITM2A-B and -C sequences showed 21% and 55% reduction of ITM2A by qPCR, respectively, while shITM2A-A was ineffective (Figure 5B). We continued our study with the shITM2A-C cells, which showed the best knockdown efficiency. C2C12 shITM2A-C cells presented a morphological delay in myotube differentiation (Figure 5A), which was confirmed by delayed myosin upregulation by Western blot, compared to parental and control shScramble C2C12 cells (Figure 5A). Analysis of key cell cycle regulators in shITM2A-C C2C12 cells showed an early increase of p21 at day 3 and a lack of or delayed upregulation of CDK2, CDK4, CDK6, cyclin D1 and cyclin D3 compared to control shCONTROL cells (Figure 5C).
ITM2A Is a Negative Regulator of Canonical Hh Signalling
Given that we observed a regulatory effect of ITM2A on PTCH1 expression level and non-canonical signalling in autophagy in epithelial cell lines, we next sought to investigate if ITM2A also affects canonical Hh signalling using a well-established GLI-luciferase activity assay in the Hh-responsive NIH 3T3 cells. Remarkably, the overexpression of ITM2A inhibited Gli-luciferase activation by Shh, an oncogenic SMO mutant (SMO-M2), Gli1 and Gli2 ( Figure 6A). The same effect was observed in Ptch1 −/− mouse embryonic fibroblasts (MEFs), which display high constitutive canonical Hh signalling activity due to loss of endogenous Ptch1, when overexpressing mouse Gli1 or Gli2 ( Figure 6B). Furthermore, ITM2A strongly reduced myc-GLI1 levels when expressed in NIH3T3 cells ( Figure 6C), or in HEK293 cells ( Figure 6D), a non-ciliated cell type in which GLI-dependent transcription can be stimulated in response to GLI1 overexpression. These findings suggest that ITM2A acts as a negative regulator of Gli stability and transcriptional activity, independently of its interaction with PTCH1.
Identification of ITM2A-Interacting Proteins by Mass Spectrometry
In order to better understand the role of ITM2A in Hh signalling and skeletal muscle differentiation, we overexpressed a tagged ITM2A protein in HEK293 cells and performed immunoprecipitation followed by mass spectrometry to identify potential interacting proteins. Cells expressing empty vector were used as a negative control to subtract non-specific contaminating proteins. ITM2A immunoprecipitates contained a number of proteins listed in Table 4.
STRING analysis reveals highly significant clustering (p = 1 × 10⁻¹⁶) of over 90% of the proteins identified (Figure 7), with sub-clusters related to ER protein quality control, mitochondrial membrane protein translocation, nuclear import, and stress granules. ITM2A interaction with NUP93 and CSE1L could be involved in negative regulation of Gli-dependent transcription through impairment of GLI1 and GLI2 nuclear import. Some other proteins identified play important roles in the recognition of defective mitochondria for clearance by mitophagy (prohibitin and prohibitin-2, ADACT3, HSPA1B). This snapshot of the ITM2A interactome suggests that it may function at ER-mitochondria contacts to regulate mitophagy, which could contribute to mitochondrial network remodeling during myogenic differentiation.
Discussion
In this study, we present evidence that the single-pass membrane protein ITM2A is a negative regulator of canonical and non-canonical Hh signalling. On the one hand, ITM2A physically interacts with PTCH1 and reduces its stability and its biological function as an endogenous inhibitor of autophagy. On the other hand, ITM2A reduces GLI1 stability and inhibits GLI transcriptional activity at a step downstream of PTCH1, impairing canonical Hh signalling. Our findings refute our original hypothesis that ITM2A could be a mediator of PTCH1-induced autophagic flux reduction, suggesting instead that ITM2A's effects on autophagy are independent of PTCH1. The identification of ITM2A-interacting proteins by mass spectrometry provides the basis for testable hypotheses about the mechanism by which ITM2A acts as a negative regulator of Hh signalling and how it regulates myogenic differentiation, which will be explored in the future.
In the first part of our study, we confirm that ITM2A interacts with the CTD of PTCH1, and with at least one other domain, since a PTCH1 mutant with deletions of the CTD and cytosolic middle loop was able to co-immunoprecipitate ITM2A. This suggests that ITM2A likely interacts with PTCH1 through both its intracellular 53 aa domain and its extracellular BRICHOS domain.
ITM2A and PTCH1 exert the same autophagy-blocking phenotype, characterised by an accumulation of autophagosomes [11,12]. However, our findings demonstrate that their interaction is not necessary for the autophagic flux blockade induced by PTCH1 in HEK293 or HeLa cells. In agreement with the lack of a cooperative effect, the expression of ITM2A causes a direct or indirect degradation of PTCH1, while silencing of endogenous ITM2A results in a concomitant increase in PTCH1 levels. Even though the change in PTCH1 expression in the ITM2A knockdown cells was not statistically significant, the slight increase in PTCH1 expression was enough to enhance its inhibitory effect on autophagy. It is possible that the effects of the knockdown are not as marked as the effects of the overexpression because the endogenous levels of ITM2A in HEK293 and HeLa cells are low, and also because the efficiency of the knockdown was not 100%. This is supported by the observation that, when comparing the effects in the two cell lines, the results are statistically significant in HeLa cells, which presented a higher knockdown efficiency. In summary, our study suggests that ITM2A is not a necessary mediator of the autophagic flux blockade regulated by PTCH1, but instead reduces PTCH1 activity while still maintaining its own capacity to block autophagy independently of their interaction.
While the effect of ITM2A on PTCH1 protein levels suggested that it could stimulate canonical Hh signalling, the GLI-luciferase experiments show a clear inhibitory effect when the pathway is activated by upregulation of ligand (Shh), by an oncogenic SMO mutant, or by overexpression of the main GLI family transcriptional activators, GLI1 and GLI2. This indicates that ITM2A exerts an inhibitory effect at the level of the GLI transcription factors. While the inhibitory mechanism is unknown, we speculate that interaction of ITM2A with NUP93 (a nucleoporin) and CSE1L (an exportin) might regulate GLI2 ciliary trafficking and/or nuclear accumulation. NUP93 has been shown to localise at the ciliary base and regulate the permeability barrier [23]. CSE1L mediates the re-export of importin-α from the nucleus after importin substrates, which include GLI2, are released into the nucleoplasm [24,25].
The negative regulation of Hh signalling by ITM2A could play an important role during myogenic differentiation. It is well known that canonical Shh signalling maintains myoblasts and satellite cells in the proliferative state, when ITM2A levels are lower. Induction of myogenic terminal differentiation is accompanied by upregulation of ITM2A and a simultaneous decrease in autophagic flux. While our interactome analysis in HEK293 cells cannot be directly extrapolated to other cell types, it revealed a subset of proteins that participate in the recognition of defective mitochondria for clearance by mitophagy (prohibitin and prohibitin-2, ADACT3, HSPA1B). This selective type of autophagy has been shown to play an important role during mitochondrial network remodelling in myogenic differentiation and could explain the regulatory role of ITM2A in C2C12 cells.
The process of myogenic differentiation requires the induction of muscle-specific genes, such as myosin, and irreversible cell cycle withdrawal [26,27]. Previous studies showed that the upregulation of cyclin D3, as opposed to cyclin D1 and cyclin D2, plays a key role in cell cycle withdrawal during myogenic differentiation [28,29]. Our results show that knockdown of ITM2A expression in C2C12 cells results in a delay in differentiation together with reduced levels of cyclin D3 expression. Therefore, it is possible that ITM2A regulates a process needed for the upregulation of cyclin D3 during differentiation of C2C12 cells. Furthermore, ITM2A could regulate cell cycle progression through interaction with CDKN2A (also known as p16INK4a), as suggested by our mass spectrometry analysis. Given the positive role of ITM2A in C2C12 differentiation, it is possible that it stimulates the function of this CDK4/6 inhibitor to accelerate cell cycle exit. Future studies will investigate whether ITM2A is involved in the stabilization of the main complexes that maintain the cell cycle withdrawal required for the differentiation of myoblasts into myotubes. The potential physical interaction with CDKN2A suggested by the proteomics analysis is an attractive starting point for further investigation of this process.
In conclusion, the results presented here support a negative role of ITM2A on Hh signalling and confirm its requirement during terminal differentiation of skeletal muscle.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
All data related to this study is presented in this manuscript; however, any reasonable request can be directed to N.A.R.-D.G.
"Biology",
"Medicine"
] |
The role of beam geometry in population statistics and pulse profiles of radio and gamma-ray pulsars
We present results of a pulsar population synthesis study that incorporates a number of recent developments and some significant improvements over our previous study. We have included the results of the Parkes multi-beam pulsar survey in our select group of nine radio surveys, doubling our sample of radio pulsars. We adopted, with some modifications, the radio beam geometry of Arzoumanian, Chernoff & Cordes (2002). For the $\gamma$-ray beam, we have assumed the slot gap geometry described in the work of Muslimov & Harding (2003). To account for the shape of the distribution of radio pulsars in the $\dot P-P$ diagram, we continue to find that decay of the magnetic field on a timescale of 2.8 Myr is needed. With all nine surveys, our model predicts that EGRET should have seen 7 radio-quiet (below the sensitivity of these radio surveys) and 19 radio-loud $\gamma$-ray pulsars. AGILE (nominal sensitivity map) is expected to detect 13 radio-quiet and 37 radio-loud $\gamma$-ray pulsars, while GLAST, with greater sensitivity, is expected to detect 276 radio-quiet and 344 radio-loud $\gamma$-ray pulsars. When the Parkes multi-beam pulsar survey is excluded, the ratio of radio-loud to radio-quiet $\gamma$-ray pulsars decreases, especially for GLAST. The decrease for EGRET is 45%, implying that some fraction of EGRET unidentified sources are radio-loud $\gamma$-ray pulsars. In the radio geometry adopted, short-period pulsars are core dominated. Unlike the EGRET $\gamma$-ray pulsars, our model predicts that when two $\gamma$-ray peaks appear in the pulse profile, a dominant radio core peak appears in between the $\gamma$-ray peaks. Our findings suggest that further improvements are required in describing both the radio and $\gamma$-ray geometries.
Introduction
Rotation-powered pulsars are the brightest class of γ-ray sources detected by the Compton γ-Ray Observatory (CGRO). The high-energy telescope EGRET made firm detections of pulsed γ-ray emission from five known radio pulsars (Thompson 2001), and possible detections from several others (Kanbach 2002). In addition, the high-energy pulsar Geminga may be radio-quiet, or at least radio-weak. EGRET also discovered more than 200 γ-ray sources (Hartman et al. 1999), most of which are still unidentified. However, several dozen new radio pulsars, out of more than 600 discovered since the end of the CGRO mission by the recent Parkes multi-beam pulsar survey (PMBPS) or in deep targeted observations (Lorimer 2003), lie within the error circles of EGRET sources. Although many of these are young, energetic pulsars, their identification as γ-ray pulsars must await observation with the next γ-ray telescopes, AGILE and GLAST.
In the meantime, population synthesis studies of radio and γ-ray pulsars can predict the number of radio-loud and radio-quiet γ-ray pulsar detections expected by different telescopes, assuming different models for radio and γ-ray emission. Even though current radio and γ-ray emission models have a number of outstanding uncertainties, the results of such studies can provide quite sensitive discrimination between models. In particular, polar cap and outer gap models of γ-ray emission make very different predictions of the number of radio-loud and radio-quiet γ-ray pulsars. Polar cap models (e.g. Daugherty & Harding 1996), where the high-energy emission region is located on the same open field lines as the radio emission, expect a large overlap in the radio and γ-ray emission beams and thus a higher ratio of radio-loud to radio-quiet γ-ray pulsars. On the other hand, outer gap models predict a smaller overlap between γ-ray and radio emission beams, because the high-energy and visible radio emission originate from opposite poles (Romani & Yadigaroglu 1995; Cheng et al. 2000), thus predicting more radio-quiet than radio-loud γ-ray pulsars. Thus, population synthesis can also address the question of how many EGRET unidentified sources are radio pulsars. Results of our initial study of pulsars in the Galaxy were presented by Gonthier et al. (2002). In this work, we evolved neutron stars from birth distributions in space, magnetic field strength, period and kick velocity, in the Galactic potential, to simulate the population of radio pulsars detected in eight surveys of the Princeton catalog (Taylor, Manchester & Lyne 1993). A very simple model of radio and γ-ray beams was assumed, in which both were aligned with a solid angle of 1 sr. Radio luminosity was assigned using the model of Narayan & Ostriker (1990) and γ-ray luminosity from the polar cap model of Zhang & Harding (2000). We found that agreement of the distribution of simulated radio pulsars with the observed distribution was significantly improved by assuming decay of the neutron star surface magnetic field on a time scale of 5 Myr. With these assumptions, EGRET should have detected 9 radio-loud and 2 radio-quiet γ-ray pulsars, and GLAST should detect 90 radio-loud and 101 radio-quiet pulsars (9 detected as pulsed sources). Because the radio and γ-ray beam apertures were assumed to be identical, radio-quiet γ-ray pulsars were those whose radio emission is too weak to be detected by the selected radio surveys.
There have been a number of new developments in both radio pulsar observation and analysis, and in γ-ray pulsar theory, since we completed our initial population study. The PMBPS (Manchester et al. 2001; Morris et al. 2002; Kramer et al. 2003) is nearly complete and has more than doubled the number of radio pulsars with measured period derivatives, from 445 to nearly 1300. Determination of pulsar distances from dispersion measure has been greatly improved with the development of a new model by Cordes & Lazio (2002). New radio luminosity and beam models have been developed by Arzoumanian, Chernoff & Cordes (2002), which describe core and conal components of the emission and their dependence on period and frequency. Arzoumanian, Chernoff & Cordes (2002) have also derived a new two-component distribution of radio pulsar space velocities. A new polar cap γ-ray emission model has been developed by Muslimov & Harding (2003), in which radiation from pair cascades at high altitude along the edge of a slot gap forms a wide hollow cone of emission. In addition, the solid angle as well as the luminosity of the γ-ray beam is described in this model. This paper presents results of an expanded and updated pulsar population synthesis study that includes all of the above recent developments, as well as improved γ-ray sensitivity maps. By incorporating independent models for the radio and γ-ray beam geometry, we are now able to investigate how the beam geometry affects the observable characteristics of radio-loud and radio-quiet γ-ray pulsar populations. We are also able to address the question of how many EGRET unidentified sources are expected to be radio-loud or radio-quiet γ-ray pulsars in the polar cap model. Of particular interest is the issue of how many of the new Parkes radio pulsars in EGRET error circles are counterpart γ-ray pulsars. In addition, we can make more accurate estimates of the numbers of radio-loud or radio-quiet γ-ray pulsars detectable by the AGILE and GLAST telescopes.
Radio Emission Geometry
We have adopted the geometry model for the radio emission beams presented by Arzoumanian, Chernoff & Cordes (2002) (hereafter ACC), with some slight modifications. We assume a core and a single conal component described by Gaussians, with a characteristic core width $\rho_{\rm core} = 1.5^{\circ}\, P^{-0.5}$ and a frequency-dependent conal width $\rho_{\rm cone}$, where the period, $P$, is in seconds and the frequency, $\nu$, is in MHz. The characteristic core width is the width of the core beam at $1/e$ of the peak intensity. We have incorporated the radius-to-frequency mapping in the conal width developed by Mitra & Deshpande (1999); although they introduce elliptical shapes to the conal geometry, they find no compelling reason to abandon circular beams. The coefficient of $5.2^{\circ}$ in the conal width is chosen to give the same width at 400 MHz as in ACC.
For each simulated pulsar, the pulse profile is binned into 500 bins of the phase angle, $\phi$, ranging from $-\pi$ to $\pi$. Each bin is assigned a flux, $s(\phi, \nu)$, consisting of the sum of the flux contributions from the core and cone components,
$$s(\phi,\nu) \propto \sum_{i} \nu^{\alpha_i}\, f_i(\theta)\, \frac{L_i}{d^2},$$
where $i$ indicates core or cone, $\alpha_i$ is the spectral index ($S_i \propto \nu^{\alpha_i}$ with $\alpha_i < 0$), $\theta$ is the polar angle to the magnetic axis, $f_i(\theta)$ is the angular distribution of the component flux, $L_i$ is the component luminosity and $d$ is the distance to the pulsar. The relationship between the phase angle, $\phi$, and the polar angle, $\theta$, depends on the viewing geometry through
$$\cos\theta = \sin\alpha \sin\zeta \cos\phi + \cos\alpha \cos\zeta.$$
During the simulation, the magnetic inclination angle, α, and the observer's line of sight angle, ζ, are chosen randomly between zero and π/2, accounting for emission from both poles. The difference between these two angles defines the impact angle, β = ζ − α.
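For concreteness, the viewing-geometry relation above can be evaluated per phase bin as in this sketch (illustrative only, not the authors' code; the angle values are arbitrary):

```python
import numpy as np

def polar_angle(alpha, zeta, phi):
    """Polar angle theta to the magnetic axis for magnetic inclination alpha,
    line-of-sight angle zeta, and rotational phase phi (all in radians)."""
    cos_theta = np.sin(alpha) * np.sin(zeta) * np.cos(phi) + np.cos(alpha) * np.cos(zeta)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

phi = np.linspace(-np.pi, np.pi, 500)   # 500 phase bins, as in the text
theta = polar_angle(np.radians(30.0), np.radians(35.0), phi)  # impact angle beta = 5 deg
```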
We assume that the spectrum of each component has a low-frequency cutoff of 50 MHz and can be modeled by a single power law, with spectral indices of $\alpha_{\rm core} = -2.1$ and $\alpha_{\rm cone} = -1.6$. It is generally agreed that the spectra of cores are steeper than those of cones, especially for short-period pulsars (Manchester 1988; Rankin 1983). As discussed later in section 5, we have assumed constant spectral indices with a difference of 0.5 between the core and cone indices.
The angular distributions of the core and conal components are given by the Gaussians
$$f_{\rm core}(\theta) = \frac{1}{\Omega_{\rm core}}\, e^{-\theta^2/\rho_{\rm core}^2} \qquad {\rm and} \qquad f_{\rm cone}(\theta) = \frac{1}{\Omega_{\rm cone}}\, e^{-(\theta-\bar{\theta})^2/w_e^2},$$
where $\bar{\theta}$ and $w_e$ are the radius and width of the annulus of the conal beam. The solid angles for each of the components are chosen to normalize the Gaussian distributions when integrating over the polar angle, $\theta$, and the azimuthal angle, $\phi$, and are given by the approximate expressions $\Omega_{\rm core} = \pi\rho_{\rm core}^2$ and $\Omega_{\rm cone} \approx 2\pi^{3/2}\,\bar{\theta}\, w_e$. These expressions differ slightly from the ones used by Arzoumanian (private communication). With the above definition of $w_e$, when $\theta = \rho_{\rm cone}$, $\rho_{\rm cone}$ represents the radius of the cone at half power. The half-angle coefficient of $3.9^{\circ}$ in the expression for the annulus radius is the angle at which the conal intensity peaks; this coefficient corresponds to that of the middle cone of the three conal components discussed by Mitra & Deshpande (1999). The phase angle, $\phi$, is varied between $-\pi$ and $\pi$ and divided into 500 bins, with the flux contributions of the core and conal components evaluated and summed. To determine whether a simulated pulsar is detected, the average flux of the pulse profile, $S_{\rm ave}$, is compared to the flux threshold, $S_{\rm min}$, of each survey at its corresponding frequency. If detected, the pulsar is flagged as radio-loud; otherwise it is radio-quiet.
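A minimal sketch of how the core and annular-cone Gaussian beams could be summed into a profile and tested against a survey threshold. The beam parameters, the threshold, and the $\Omega_{\rm cone}$ normalization used here are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def pulse_profile(theta, rho_core, theta_bar, w_e, L_core, L_cone, d_kpc):
    """Sum of core and annular-cone Gaussian beams (all angles in radians)."""
    omega_core = np.pi * rho_core**2                        # normalizing solid angles
    omega_cone = 2.0 * np.pi**1.5 * theta_bar * w_e         # approximate annulus normalization
    f_core = np.exp(-theta**2 / rho_core**2) / omega_core
    f_cone = np.exp(-(theta - theta_bar)**2 / w_e**2) / omega_cone
    return (L_core * f_core + L_cone * f_cone) / d_kpc**2   # flux per phase bin

# theta from the geometry sketch above; widths and luminosities are illustrative
theta = np.linspace(0.0, 0.2, 500)
s = pulse_profile(theta, rho_core=0.03, theta_bar=0.08, w_e=0.02,
                  L_core=10.0, L_cone=5.0, d_kpc=2.0)
radio_loud = s.mean() > 0.15    # compare S_ave against a survey threshold S_min
```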
In the ACC study, pulsar surveys were all selected near a frequency of 400 MHz; hence they had no need to introduce any frequency dependence into the fluxes of the core and conal components. An important assumption made in ACC is that the ratio of the core to cone flux is given by $r = \frac{20}{3} P^{-1}$. In the present study, we have two groups of pulsar surveys: one group has frequencies near 400 MHz, while the other has frequencies near 1400 MHz. Therefore, we have had to introduce the above frequency dependence into the spectra of the core and conal components, with a ratio of the core to cone peak flux given by
$$r(\nu) = \frac{20}{3}\, P^{-1} \left(\frac{\nu}{400\ {\rm MHz}}\right)^{\alpha_{\rm core}-\alpha_{\rm cone}},$$
which is similar to the ratio used in ACC at a constant frequency of 400 MHz. With this ratio, short-period pulsars will have their radio fluxes dominated by the core component, with a weak conal component depending on the viewing geometry.
The luminosities of the core and cone components, $L_{\rm core}$ and $L_{\rm cone}$, are apportioned according to the core-to-cone luminosity ratio $r/r_o$, where the total luminosity is given by
$$L = 3.4\times 10^{10}\, P^{-1.3}\, \dot{P}^{0.4}\ \ ({\rm mJy \cdot kpc^2 \cdot MHz}).$$
This luminosity is reduced by a factor of 60 from the one used in ACC, as discussed in section 5. Under this assumption, radio pulsars are treated as standard candles with well-defined luminosities in terms of only the period and period derivative, whose exponents in the expression above for $L$ come from parameters in Table 1 of ACC for the first model, assuming a braking index $n = 3$ with two velocity components. There is no dithering of the luminosity as in the case of Narayan & Ostriker (1990); here the random viewing geometry accounts completely for the dithering that is required when the beam and viewing geometries are not included. However, as discussed later in section 5, in order to obtain a reasonable birth rate and adequate agreement between the distributions of distance, flux and dispersion measure, we have had to reduce the radio luminosity used in ACC by a substantial amount.
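Under these assumptions the total radio pseudo-luminosity is a deterministic function of P and Ṗ; a sketch follows. Whether the coefficient quoted above already includes the factor-of-60 reduction is not explicit in the text, so the sketch applies the reduction explicitly as an assumption:

```python
def radio_luminosity(p_s, pdot):
    """Total radio pseudo-luminosity in mJy kpc^2 MHz from period (s) and
    period derivative (s/s), reduced by a factor of 60 from the ACC value."""
    return 3.4e10 * p_s**-1.3 * pdot**0.4 / 60.0

print(radio_luminosity(p_s=0.5, pdot=1.0e-14))  # a typical middle-aged pulsar
```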
Gamma-ray Emission Geometry
For the geometry of the γ-ray beam, we have adapted the emission from the slot gap described in Muslimov & Harding (2003). The slot gap (Arons & Scharlemann 1979) is a narrow region between two conducting boundaries, the last open field line and the pair formation front, extending from the neutron star surface up to the light cylinder. Since the electric field is relatively small in the slot gap, primary particles accelerate more slowly and pair cascades form at altitudes of several stellar radii above the surface. We model the radiation from these pair cascades as having two components: curvature radiation of the primary electrons and synchrotron radiation from the electron-positron pairs. We obtain the number of curvature, $N_{\rm CR}$, and synchrotron, $N_{\rm SR}$, photons emitted per primary particle by integrating the differential production rates over the available energy and over the pulsar phase angle $\phi$. Since we have not included relativistic effects such as aberration and time-of-flight delays in modeling the γ-ray beam, the caustic peaks found by Dyks & Rudak (2003) and in outer gap models (e.g. Romani & Yadigaroglu 1995) do not appear in our calculations.
Curvature Radiation
We assume that the curvature radiation takes place in the slot gap at the last open field line (see Figure 1) and is integrated from a lower $\gamma$-ray threshold, $E_\gamma$, of 100 MeV up to the curvature radiation critical energy, $E_{\rm CR}$, where $E_\gamma$ and $E_{\rm CR}$ are in $m_e c^2$ units. The angular distribution of the emission, $dN_{\rm CR}/d\Omega$, depends on the curvature photon critical energy and on the radius of curvature,
$$\rho_c = \frac{r\,(1 + \cos^2\theta)^{3/2}}{3 \sin\theta\,(1 + \cos\theta)},$$
where $r$ is the radial distance on the last open dipole field line corresponding to magnetic colatitude, $\theta$, and $\theta_\gamma = \frac{3}{2}\theta$ is the photon emission angle, which is tangent to this field line. $R$ is the neutron star radius, taken to be $10^6$ cm, $r_e$ is the Compton wavelength of an electron, and $\gamma_o$ is the initial Lorentz factor of the particle, which takes three forms: the first for the case where the electric field is not screened by electron-positron pairs (Harding et al. 2002), and the second and third for cases where the electric field is screened by pairs in the unsaturated (I) and saturated (II) regimes (Zhang & Harding 2000). $P$ is the period in seconds and $B_{12}$ is the magnetic field at the surface in units of $10^{12}$ G. At a given inclination angle $\alpha$, the line-of-sight angle, $\zeta$, and phase angle, $\phi$, define a polar angle, $\theta$ (through the viewing-geometry relation above), where the emission occurs tangent to the last open field line at a radial distance, $r$, from the stellar center, and the curvature emission rate per primary particle is given by $dN_{\rm CR}/d\Omega$.
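The dipole-field-line radius of curvature above can be coded directly; a sketch (inputs in cm and radians; the evaluation point is arbitrary):

```python
import numpy as np

def curvature_radius(r, theta):
    """Radius of curvature of the last open dipole field line at radial
    distance r and magnetic colatitude theta (radians)."""
    c = np.cos(theta)
    return r * (1.0 + c**2)**1.5 / (3.0 * np.sin(theta) * (1.0 + c))

R_NS = 1.0e6  # neutron star radius in cm, as in the text
print(curvature_radius(3.5 * R_NS, np.radians(10.0)))  # e.g. at R_min = 3.5 R
```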
Synchrotron Radiation
The synchrotron radiation from cascade pairs takes place along the slot gap, beginning at an altitude $R_{\rm min}$, where the first pairs are produced, and continuing out to $R_{\rm SR}$, the maximum radius at which pairs are produced (see Figure 1). $R_{\rm SR}$ is determined as the altitude where the curvature radiation critical energy, $E_{\rm CR}$, is equal to the photon escape energy, $E_{\rm esc}$ (i.e. the minimum energy of photons capable of one-photon pair production). The pair-escape energy, $E_{\rm esc}(r)$, in $m_e c^2$ units, is given approximately in Zhang & Harding (2000) and Harding (2001). The angles $\theta_{\rm min}$ and $\theta_s$ are those corresponding to the radii $R_{\rm min}$ and $R_{\rm SR}$ along the field line at the edge of the slot gap, and the corresponding photon emission angles are $\theta_{\gamma,{\rm min}} \approx \frac{3}{2}\theta_{\rm min}$ and $\theta_{\gamma,s} \approx \frac{3}{2}\theta_s$. The parameter $R_{\rm min}/R$ is set to 3.5 in all of our simulations, fixing the beginning of the emission zone to be 2.5 stellar radii above the surface (Muslimov & Harding 2003). We assume that the electron-positron pairs in the cascade have a spectrum $N_\pm(\gamma_p) = C_p\, \gamma_p^{-p}$, extending from a minimum $\gamma_{\rm min} = E_{\rm esc}(R_{\rm min})/2$ to a maximum at $\gamma_{\rm max} = E_{\rm CR}/2$, where $E_{\rm esc}(R_{\rm min})$ is the photon pair-escape energy at radius $R_{\rm min}$.
The integral photon spectrum above energy $E_\gamma$ of the synchrotron radiation from the electron-positron pairs with spectral index $p$, per primary particle, depends on $E_{\rm SR}$, the critical synchrotron energy, in $m_e c^2$ units, of pairs at their maximum energy $\gamma_{\rm max}$, and on $B'$, the local field strength in units of the critical field $4.4 \times 10^{13}$ G. The pitch angle $\psi$ of the pairs is assumed to be that of the parent photon direction with respect to the local field at the pair production point. The spectral index of the pairs is given by $p = 2\alpha_n - 1$, where the spectral index of the photons, $\alpha_n$, is determined (Harding & Daugherty 1999; Zhang & Harding 2000) by the number of generations, $n$, of the pair cascade, with $\kappa = 3/64$ and $\alpha_1 = 5/3$ for curvature radiation with losses. The cascade generation number is determined by $E_o$, the curvature radiation critical energy at the initial colatitude angle $\theta_{\rm min}$ of the radiation zone.
We normalize the pair spectrum to the total cascade multiplicity, which sets the normalization factor $C_S$ for the synchrotron photon spectrum. The emission from the high-altitude (2-4 stellar radii) cascades from the slot gap along the last open field line forms a broad, hollow-cone beam. The parameter representing the longitudinal thickness of the slot gap is expressed in units of the polar cap half-angle $\theta_{\rm min}$ (Muslimov & Harding 2003), where $B_{12}$ is in units of $10^{12}$ G. The acceleration-cascade simulations indicate that the width of the slot gap widens as the pulsar ages and saturates at a value of approximately 0.3. As seen in Figure 1, the interior and exterior polar angles of the radiation from the slot gap at $R_{\rm min}$ define the edges of the beam. We take the average opening angle of the cascade radiation from the slot gap between $r = R_{\rm min}$ and $r = R_{\rm SR}$ as $\theta_{\rm SG} = (\theta_{\rm SG}^{\rm min} + \theta_{\gamma,s})/2$. We approximate the angular distribution of the synchrotron radiation component of the entire cascade between $\theta_{\rm min}$ and $\theta_s$ as a hollow beam with a conal Gaussian whose width is the full width at $1/e$ of the maximum. The integral photon spectrum above energy $E_\gamma$ of the synchrotron radiation, per primary particle at a given polar angle, then follows. The current of primary electrons in the slot gap that results in curvature and synchrotron radiation is limited to a fraction of the Goldreich-Julian current, $\dot{n}_{\rm GJ}$; this current multiplies the integral of the curvature and synchrotron emission per primary particle to give the total slot gap emission beam.
The total flux due to both curvature and synchrotron radiation is calculated for a given phase angle, $\phi$, which is related to $\theta_\gamma$ through the viewing-geometry relation, for a pulse profile with 500 bins of phase angle from $-\pi$ to $\pi$. The average of the profile is obtained and compared to the appropriate instrumental flux threshold; if the average flux is above the threshold, the γ-ray pulsar is detected. This condition is tested independently of the radio flux and the appropriate radio survey threshold, allowing us to designate each detected γ-ray pulsar as radio-loud or radio-quiet.
Monte Carlo Simulations
We discuss here some of the important changes that have been made to our Monte Carlo simulation code since our previous work in Gonthier et al. (2002). While we believe that it is important to place the neutron stars at birth in spiral arms, we have not yet included spiral arm structure in our simulations. As in Gonthier et al. (2002), we distributed pulsars at birth in the Galactic disk according to the prescription of Paczyński (1990). In a cylindrical coordinate system, the azimuthal angle, φ, is randomly chosen between 0 and 2π. The z distribution varies exponentially with distance from the plane, while the radial distribution peaks at 4.5 kpc and decreases exponentially from the center of the Galaxy. Given the initial position and velocity, the trajectory of each neutron star is evolved in the Galactic potential to the present.
Comparison group of pulsars in the ATNF catalog
In order to have a comparison group to normalize our simulation, we have selected pulsars from the Australia Telescope National Facility (ATNF) pulsar catalogue. We chose pulsars within the Galaxy with periods greater than 30 ms and with positive period derivatives, obtaining a comparison group of 978 pulsars detected by these nine surveys. Selecting pulsars with periods greater than 30 ms ensures that we have left out of our group most of the millisecond pulsars that have been recycled in binary systems; we are not currently simulating this class of pulsars, since their evolution is more complicated. We have also not included the anomalous X-ray pulsars, the soft-gamma-ray repeaters, or pulsars in globular clusters in our comparison group. We run the Monte Carlo simulation until the code detects the same total number of radio pulsars as have been observed with the group of surveys. With this normalization, a neutron star birth rate is predicted, as well as the number of γ-ray pulsars detected by various instruments. However, in order to obtain smoother simulated distributions, we run the code for ten times the number of pulsars detected by the radio surveys and then normalize accordingly.
Flux sensitivity of the Parkes multi-beam pulsar survey
We have included the eight radio surveys described in Gonthier et al. (2002) along with the new PMBPS, which has an angular coverage of $|b| < 5^{\circ}$ and $\ell = 260^{\circ}$ to $\ell = 50^{\circ}$, with an assumed geometric efficiency of 100%. In Gonthier et al. (2002), we calculated the minimum radio thresholds, $S_{\rm min}$, for the selected group of radio surveys using the Dewey et al. (1985) formula. We attempted to use the same formula for the PMBPS using the parameters indicated in Manchester et al. (2001). However, we found that an additional factor of ∼2 multiplying the limiting sensitivity is required to reproduce the $S_{\rm min}$ curves in Figure 2 of Manchester et al. (2001), as shown here in Figure 2. The Dewey formula underpredicts $S_{\rm min}$, and a more realistic treatment of narrow pulse widths (smaller duty cycles) in the Fourier search is not as optimistic as the Dewey formula (F. Crawford, private communication). As a result, we chose to evaluate $S_{\rm min}$ for the PMBPS using an IDL code (Crawford, private communication) that we translated into C++ and incorporated into our Monte Carlo code, where it is called event-by-event. This routine was used to create Figure 2 of Manchester et al. (2001), reproduced here in Figure 2 along with the $S_{\rm min}$ curves predicted by the Dewey formula, with the extra factor of 2, for the indicated DMs. In our simulations, we have then scaled the limiting sensitivities as discussed in Manchester et al. (2001).
Gamma-ray Thresholds
We simulate the γ-ray pulsars detected by EGRET, AGILE and GLAST. If the simulated γ-ray flux, obtained from the average flux in the pulse profile, is above a detector threshold, the pulsar is counted as a γ-ray pulsar detected by the corresponding instrument. We have included all-sky sensitivity maps for both EGRET (I. Grenier, private communication) and AGILE (A. Pellizzoni, private communication), shown in Figures 3a and 3b. For AGILE, we have three all-sky sensitivity maps representing the best, nominal and worst case scenarios; the one portrayed in Figure 3b is for the nominal case. For GLAST, we have used the following thresholds: in-plane ($|b| < 10^{\circ}$) $5 \times 10^{-9}$ photons/(cm$^2$·s), out-of-plane ($|b| \geq 10^{\circ}$) $2 \times 10^{-9}$ photons/(cm$^2$·s) (D. Thompson, private communication), and for pulsed emission $5 \times 10^{-8}$ photons/(cm$^2$·s) (S. Ritz, private communication; McLaughlin & Cordes 2000). The threshold for pulsed emission detection in a blind periodicity search is based on techniques used in periodicity searches of EGRET data (Mattox et al. 1996; Chandler et al. 2001).
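The GLAST detection test described above amounts to a latitude-dependent threshold lookup; a sketch using the threshold numbers from the text (the function name and interface are, of course, ours):

```python
def glast_detected(flux, b_deg, pulsed=False):
    """Flux in photons/(cm^2 s); b_deg is Galactic latitude in degrees."""
    if pulsed:
        return flux > 5.0e-8                       # blind periodicity search threshold
    threshold = 5.0e-9 if abs(b_deg) < 10.0 else 2.0e-9   # in-plane vs out-of-plane
    return flux > threshold

print(glast_detected(3.0e-9, b_deg=25.0))  # True: above the out-of-plane threshold
```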
New distance model
We have incorporated the new electron density model of the Galaxy from Cordes & Lazio (2003) by calling the FORTRAN subroutines from within our code to calculate the dispersion measure (DM) of each simulated pulsar. The DM leads to a smearing of the pulse, affecting the flux threshold for radio detection. For comparison, we have recalculated the distances of the pulsars in the ATNF catalogue from the measured DM and the pulsar's location using the new distance model. In Figure 4, we show the histogram of the logarithm of the absolute value of the difference between the distance obtained from the new distance model and that from the old distance model for our selected group of 978 pulsars. The distances obtained with the new distance model are about 20% smaller than those obtained with the old distance model. For pulsars in the catalogue whose best-estimate distance differs from the one obtained using the old distance model, we have assumed that the distance was established by other methods and, therefore, is more reliable.
Initial period distributions
Recently, various observations of young supernova remnants have made it possible to measure the speed of the expanding shell and the period and period derivative of the pulsar, thereby determining the initial period of the pulsar. For example, the X-ray pulsars PSR J1811-1925 and PSR J0205+6449 (Gavriil et al. 2003) have been associated with the supernova remnants G11.2-0.3 and 3C 58, respectively, and may suggest that these pulsars were born with a period of ∼65 ms. In contrast to our previous study (Gonthier et al. 2002), which used a constant birth period of 30 ms, we studied Gaussian and flat distributions to describe the initial period. We found that the overall population statistics are not very sensitive to the initial spin distribution, which affects only the short-period population of pulsars in the $\dot P - P$ diagram. While significant progress is being made in deducing the initial periods of pulsars, the shape of the distribution is not well defined at present. We have concluded that a flat distribution from 0 to 150 ms accommodates the observations and have used this distribution in this study.
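Drawing birth periods from the adopted flat distribution is a one-liner; a sketch (sample size arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
p0 = rng.uniform(0.0, 0.150, size=100_000)  # birth periods in s, flat on [0, 150] ms
```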
Decay of the Magnetic Field
We continue to be steered in the direction of incorporating the decay of the magnetic field in order to achieve better comparisons. Originally, we included eight radio surveys in Gonthier et al. (2002), with 445 detected radio pulsars to compare to our simulated results, and used a single Gaussian to describe the primary magnetic field distribution with a single decay constant. The PMBPS has discovered many more pulsars, many of which are young, distant pulsars with high radio luminosities. The current pulsar catalog now has 1412 radio pulsars (http://www.atnf.csiro.au/people/pulsar/catalogue). With the PMBPS, we have a selected group of 978 detected radio pulsars with many more high-field pulsars. A single Gaussian would result in too many low-field pulsars; in order to simulate these high-field pulsars, we found it necessary to use two Gaussian distributions to skew the distribution towards high fields.
The pulsar surface magnetic field distribution at birth is represented by the sum of two log-normal Gaussian distributions, expressed as

$$\rho(\log B_0) = \sum_{i=1}^{2}\frac{A_i}{\sqrt{2\pi}\,\sigma_i}\exp\!\left[-\frac{(\log B_0-\log B_i)^2}{2\sigma_i^2}\right],$$

where the parameters are indicated in Table 1. While there are two Gaussians describing the initial field distribution of the pulsars at birth, the second Gaussian, with the higher mean field, merely skews the distribution towards higher magnetic fields and does not necessarily suggest two groups of pulsars with different field characteristics. Using a single, broader Gaussian would result in too many lower-field pulsars.
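A sketch of drawing initial fields from such a mixture follows. The weights, means, and widths used here are illustrative placeholders, since Table 1 is not reproduced in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_log_B(n, w1=0.95, mu1=12.4, sig1=0.4, mu2=13.3, sig2=0.3):
    """Draw n values of log10(B_0 / gauss) from a two-component Gaussian
    mixture in log B.  The weight w1 and the (mu, sigma) pairs stand in
    for the Table 1 parameters, which are not given in this excerpt.
    """
    pick = rng.random(n) < w1        # choose the low- or high-field component
    logB = np.where(pick,
                    rng.normal(mu1, sig1, n),
                    rng.normal(mu2, sig2, n))
    return logB
```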
The birth rate is assumed to be constant during the history of the Galaxy (at least back to 10⁹ years in the past); therefore, we randomly select the age of the pulsar from the present back to 10⁹ years. We assume dipole spin-down with a magnetic field that decays with a time constant τ_D, following Gonthier et al. (2002). In Figure 5, we present the period derivative versus the period of simulated pulsars for time constants from 10⁸ down to 5 × 10⁵ years. Indicated in the figure are lines of constant field (calculated according to Shapiro & Teukolsky 1983) and of pulsar age assuming dipole spin-down of a constant field, as well as the curvature radiation (CR) and nonresonant inverse Compton scattering (NRICS) death lines (Harding et al. 2002). Without field decay, or with a large decay constant like 10⁸ years, pulsars move from short to longer periods along constant or nearly constant field lines and pile up near the NRICS line.
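Under these assumptions the spin-down can be integrated in closed form. Using the conventional dipole relation B = 3.2 × 10¹⁹ √(P Ṗ) G (Shapiro & Teukolsky 1983) and B(t) = B₀ e^(−t/τ_D), integrating P dP = (B(t)/K)² dt gives the sketch below. This is our own worked example of the evolution the text describes, not the paper's code.

```python
import numpy as np

K = 3.2e19           # B = K * sqrt(P * Pdot) in gauss (Shapiro & Teukolsky 1983)
SEC_PER_YR = 3.156e7

def evolve_pulsar(P0, B0, age_yr, tau_D_yr=2.8e6):
    """Evolve period and period derivative under dipole spin-down with an
    exponentially decaying field, B(t) = B0 * exp(-t / tau_D).

    Integrating P * Pdot = (B(t)/K)^2 gives
        P(t)^2 = P0^2 + (B0/K)^2 * tau_D * (1 - exp(-2 t / tau_D)).
    P0 in seconds, B0 in gauss, ages in years.
    """
    t = age_yr * SEC_PER_YR
    tau = tau_D_yr * SEC_PER_YR
    P = np.sqrt(P0**2 + (B0 / K)**2 * tau * (1.0 - np.exp(-2.0 * t / tau)))
    B = B0 * np.exp(-t / tau)
    Pdot = (B / K)**2 / P
    return P, Pdot

# A 10^12 G pulsar born at 30 ms, evolved for 10 Myr: the field has decayed
# to ~3e10 G and the track has become nearly vertical in the Pdot-P diagram.
print(evolve_pulsar(0.03, 1e12, 1e7))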
Without field decay, populating the region of the diagram with small period derivatives (∼5 × 10⁻¹⁸) and medium periods (∼0.5 s) would require many more short-period pulsars in the lower-left region of the diagram, where none are observed (see Figure 8). Decreasing the decay constant produces the upside-down pear-shaped distribution seen in the detected pulsars and populates the high-field region above 5 × 10¹³ G. Unless one can alter the period and period-derivative dependence of the radio luminosity significantly, in a manner more elaborate than a simple power law, we find that field decay is required to reproduce the distribution. In subsequent simulations, we have adopted a value of 2.8 Myr for the decay constant.
Supernova kick velocity distribution
A number of studies disagree on the initial 3-D velocity distribution of neutron stars at birth, which possibly results from an asymmetric supernova kick and is typically described by a Maxwellian distribution. Lorimer, Bailes & Harrison (1997) obtained a velocity distribution with a mean of ∼480 km/s, similar to a previous study that obtained a mean of ∼450 km/s (Lyne & Lorimer 1994), yet significantly larger than most earlier pulsar statistics studies, which required space velocities of ∼150 km/s. Hansen & Phinney (1997) concluded that a mean velocity of ∼250-300 km/s best described their study. Though Hartman et al. (1997) did not use a Maxwellian distribution, they obtained a distribution with a mean velocity of 380 km/s. Gonthier et al. (2002) also did not use a Maxwellian distribution and found a distribution with a mean velocity of 170 km/s. It is clear that the velocity distribution one obtains depends heavily on the many other assumptions that go into the model, such as the radio luminosity and radio beam geometry. The brighter the radio pulsars, the greater the distance at which they are detected, resulting in a broader distribution of distance z from the plane of the Galaxy and requiring a smaller mean velocity to improve the agreement with the z distribution of detected pulsars.
In this study we have adopted the luminosity model of ACC, and so we must also adopt their kick velocity model. We follow their two-component velocity distribution, which is Maxwellian in velocity with characteristic widths of 90 and 500 km/s and is given by Equation (1) of ACC. In the ACC model, the two-component velocity model was preferred over the single-component model. The fraction of neutron stars with a width of 90 km/s is 40% and with a width of 500 km/s is 60%, leading to an average velocity of ∼540 km/s.
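As an illustration, the following sketch draws birth velocities from this two-component model; it is our own illustration, not code from ACC. Each Maxwellian is realized as three Gaussian Cartesian components with the quoted 1-D width, which reproduces the quoted mean speed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_kick_velocity(n, w_slow=0.4, sigma_slow=90.0, sigma_fast=500.0):
    """Draw n 3-D birth velocities (km/s) from the ACC two-component model:
    each component is Maxwellian, i.e. three Gaussian Cartesian components
    with the given 1-D width; 40% slow (90 km/s) and 60% fast (500 km/s).
    """
    sigma = np.where(rng.random(n) < w_slow, sigma_slow, sigma_fast)
    v = rng.normal(0.0, 1.0, (n, 3)) * sigma[:, None]
    return v

v = sample_kick_velocity(100_000)
speed = np.linalg.norm(v, axis=1)
print(speed.mean())   # ~536 km/s, consistent with the ~540 km/s in the text
```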
In Figure 6, we show the z distribution above the Galactic disk for the detected pulsars (shaded histogram) and for the simulated pulsars (open histogram). Under the assumptions of the model, the predicted distribution is a little wider than that of the detected pulsars, the two having scale heights of 182 pc and 152 pc, respectively. We recognize that many assumptions in our model are interrelated: decreasing the overall radio luminosity of the ACC model changes the z distribution, as radio-dimmer pulsars must be closer to be detected. We have chosen to keep the velocity model and overall radio luminosity model of ACC, making as few changes to the ACC model as necessary.
Reduction of the radio luminosity
Using the radio luminosity of ACC, we find that the simulated radio pulsars are too bright: too many distant pulsars are detected, the predicted neutron star birth rate is 0.11 per century, and no γ-ray pulsars are predicted to be detected by EGRET. In ACC, all the pulsar surveys used were at frequencies near 400 MHz. The set of surveys chosen for this study contains two groups, one with frequencies near 400 MHz and the other with frequencies near 1400 MHz. Since the PMBPS S_min is best accounted for in our simulation, and since this survey at 1374 MHz detected most of the pulsars observed in the Jodrell Bank 2 survey at 1400 MHz and the Parkes 1 survey at 1520 MHz, we selected only the PMBPS pulsars to represent the high-frequency (HF) surveys, while the other selected surveys near 400 MHz represent the low-frequency (LF) surveys. Focusing primarily on the distributions of pulsar distance, DM, and flux at 400 MHz and 1400 MHz, we first set the spectral indices (preserving a difference of 0.5 between core and cone indices) to give the same birth rate for the two frequency groups, and then set the overall luminosity to give a reasonable birth rate of ∼1.5 neutron stars per century. We obtain the same birth rate for each group with spectral indices of α_core = −2.1 and α_cone = −1.6. These spectral indices describe the intrinsic spectra of the radio pulsars before the selection effects of the chosen radio surveys.
In our simulation, we calculate the flux at the frequency of each survey in our selected group, as well as at 400 and 1400 MHz, by averaging the pulse profile for a given random viewing geometry for each pulsar. From the calculated S400 and S1400, we obtain a spectral index for each pulsar. In the simulation with all the surveys, we simulated 9780 radio pulsars detected by these surveys (ten times the number in our select group, to improve statistics) and find an average spectral index of −1.8 with a standard deviation of 0.2. Lorimer et al. (1995) measured the spectral indices of 280 pulsars from fluxes at radio frequencies between 408 and 1606 MHz, finding a dependence on the characteristic age of the pulsar, α = −1.7 + 0.2 log(Ṗ/P), with a standard deviation of 0.6 about this relation. Maron et al. (2000) extended the study of Lorimer et al. (1995) to lower and higher frequencies and obtained an average spectral index of −1.8 with a standard deviation of 0.2.
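The per-pulsar spectral index follows directly from the two simulated fluxes, assuming a power-law spectrum S(ν) ∝ ν^α; a minimal sketch:

```python
import numpy as np

def spectral_index(S400, S1400):
    """Spectral index alpha, assuming S(nu) ~ nu**alpha, from the simulated
    fluxes at 400 and 1400 MHz (arbitrary but common units)."""
    return np.log(S1400 / S400) / np.log(1400.0 / 400.0)

# A pulsar 9.5x brighter at 400 MHz than at 1400 MHz has alpha ~ -1.8:
print(spectral_index(9.5, 1.0))   # -1.797...
```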
To show the overall effect of reducing the radio luminosity used in ACC, we show in Figure 7 comparisons of the distributions of pulsar distance, flux at 1400 MHz, and DM for the 620 pulsars detected with the PMBPS alone and an equal number of pulsars simulated for this survey alone. Given that we calculate S_min for the PMBPS according to the formulation of Manchester et al. (2001), we believe that the S_min of this survey is the best described in our selected group, and with all pulsars detected at ∼1400 MHz, sky temperature effects are minimized, reducing further uncertainties. Indicated in Figure 7 are the factors f_red used to reduce the radio luminosity of the ACC model and the resulting distributions. Since the ACC model studied pulsar surveys at 400 MHz, f_red represents a reduction of the ACC 400 MHz luminosity by this factor. As the radio luminosity is reduced, the agreement in the distances and DMs improves, but the disagreement between the simulated and detected radio flux distributions at 1400 MHz increases. There are significantly more simulated pulsars with low radio fluxes than detected by this survey, suggesting that certain aspects of the emission geometry are still not adequately described. We find that the shape of the flux distribution is not very sensitive to the features of the conal geometry, such as its radius and width. We also wanted to predict a reasonable neutron star birth rate per century, which varies from 0.6 at f_red = 20 to 1.6 at f_red = 80. We chose to reduce the radio luminosity by a factor of f_red = 60 in the subsequent simulations, compromising between good agreement in the distances and DMs and less desirable agreement in the fluxes. This choice predicts a birth rate of 1.38 neutron stars per century for the case of all nine radio surveys.
Results
In Figure 8, we present the distribution of our select group of 978 detected pulsars in the ATNF pulsar catalog (8a) and the same number of simulated pulsars (8b) as a function of period derivative and period. The solid lines represent constant dipole magnetic fields of 10¹¹-10¹⁴ G, and the dashed lines correspond to the curvature radiation and nonresonant inverse Compton scattering death lines. As dotted curves, we show paths in the diagram for four pulsars assuming a field-decay constant of 2.8 Myr, all with ages of 10⁷ years, and with the indicated initial magnetic fields in units of B₁₂ = B/10¹² G. The age lines assuming field decay are shown as dot-dashed lines; owing to field decay, these lines are very different from the characteristic-age lines, with the oldest pulsars being 10 Myr old rather than the 1 Gyr implied assuming no field decay. As indicated in Table 1, we chose to represent the initial magnetic field distribution with two Gaussians. The initial fields of the four evolutionary paths portrayed in Figure 8 were chosen to represent the means of the two Gaussians, the higher mean plus the high-field component's width, and the lower mean minus the low-field component's width; as a result, most of the simulated pulsars lie between the lowest and highest paths. With a 2.8 Myr decay constant, the knee-like, curved portions of these paths begin before 1 Myr, and the paths become vertical after a few Myr. The paucity of pulsars in the two regions indicated with the circle and with LB and HB, for low and high field (Figure 8a), can be explained by decay of the surface magnetic field on a time scale of 2.8 Myr. While one could perhaps tailor a radio luminosity law to account for the observed distribution, field decay leads naturally to the funnel-shaped distribution of the detected pulsars.
In Figure 9, we present comparisons between detected and simulated statistics for the indicated parameters. The shaded histograms represent the distributions of the detected pulsars, while the open histograms correspond to the model simulations. The model over-predicts the number of pulsars with short periods, large period derivatives, and large distances. Improved comparisons can be obtained by making the radio luminosity less strongly dependent on the period, decreasing the period exponent from −1.3 to −0.9 and the period-derivative exponent from 0.4 to 0.3. The main discrepancy lies in the comparisons of the distance and flux distributions (Figure 7). The agreement in the flux distributions for the PMBPS alone, shown in Figure 7, is better with a reduction factor of f_red = 20, which makes the pulsars more radio bright; however, the agreement between the distance distributions is then significantly worse and the birth rate is too low. Perhaps the inability to find agreement in both the radio flux and distance distributions stems from our underlying assumption that radio pulsars are standard candles, or indicates that the spiral-arm structure of the Galaxy may have to be used for the birth locations of neutron stars in order to improve the agreement between the simulated and observed distributions.
In Figure 10, we show the distributions in Galactic coordinates as Aitoff projections for the 978 detected pulsars (10a) and the simulated pulsars (10b). The strong concentration in the Galactic disk is due primarily to the PMBPS, which contributes nearly half of the total number of pulsars, unique to this survey.
We also simulate the γ-ray pulsar detections by EGRET, AGILE and GLAST, using the assumptions discussed earlier, and independently simulate the detection of radio pulsars. The γ-ray pulsars are flagged as radio-loud if their radio fluxes are above the minimum sensitivities of the select group of surveys; otherwise they are radio-quiet. We can therefore predict the number of radio-loud and radio-quiet γ-ray pulsars detected as point sources by each of the three instruments. For known radio pulsars with measured periods and period derivatives, a γ-ray instrument can detect them as point sources and obtain a pulsed detection through reliable epoch folding. GLAST will have the sensitivity and ability to perform blind period searches and detect pulsation without a radio ephemeris. Table 2 lists the simulated statistics for the radio-quiet and radio-loud γ-ray pulsars detected by the three instruments. To improve the statistics, we run the simulation until the number of radio pulsars detected by the chosen set of radio surveys equals ten times the number actually detected by those surveys, and then renormalize by dividing by ten. EGRET observed all of the radio pulsars detected by the radio surveys excluding the PMBPS, out of which it detected 8 γ-ray pulsars: Vela, the Crab, B1951+32, B1706-44, B1055-52, B0656+14, J1048-5832, and the radio-quiet γ-ray pulsar Geminga; a couple of other candidate pulsars bring the total to perhaps 12 (Kanbach 2003). Excluding the PMBPS, our simulation predicts that EGRET should have seen 15 radio-loud and 10 radio-quiet γ-ray pulsars, with a neutron star birth rate of 1.46 per century. With all nine surveys, the predicted birth rate of 1.36 is 5% smaller and, therefore, the total number of γ-ray pulsars has also dropped by 5% (GLAST), within statistics. The EGRET Third Catalogue (Hartman et al. 1999) contains 170 unidentified point sources, some of which are expected to be pulsars. Sensitive searches performed with Chandra and the Parkes multi-beam telescopes have resulted in a few new pulsars within the error boxes of the unidentified EGRET sources (Halpern et al. 2001; D'Amico et al. 2001). Correlating the positions of the radio pulsars detected in the PMBPS with the EGRET unidentified sources, Torres et al. (2001) found 14 positional coincidences. With the nearly completed PMBPS, Kramer et al. (2003) found about 38 positional coincidences and determined that 19 ± 6 are statistically likely to be real associations. Adding the PMBPS should therefore convert radio-quiet γ-ray pulsars into radio-loud γ-ray pulsars as detected by EGRET. This is clearly the case for GLAST, and there is a significant conversion of radio-quiet to radio-loud pulsars for AGILE and EGRET when the PMBPS is added. Since our present simulation over-predicts the number of short-period, energetic pulsars, mostly detected by the eight surveys (without the PMBPS) prior to EGRET, our results over-predict the number of radio-loud pulsars detected by EGRET from these surveys. However, the number of radio-loud pulsars predicted for all surveys, including the PMBPS, is consistent with the number of plausible coincidences found by Kramer et al. (2003).
In Figure 11, we present the positions in the Ṗ-P diagram of the known γ-ray pulsars detected by EGRET (11a) and of those simulated for EGRET (11b), AGILE (11c) and GLAST (11d), where radio-loud γ-ray pulsars are shown as solid circles and radio-quiet γ-ray pulsars as crosses. Younger pulsars have higher γ-ray luminosities, which decrease as the pulsars approach the curvature death line, where curvature-radiation γ-rays can no longer produce electron-positron pairs. The γ-ray luminosity decreases significantly for older pulsars below the curvature radiation death line (Harding et al. 2002), where the main mechanism of pair production is inverse Compton scattering of the thermal soft X-rays from the stellar surface. Pulsars below the nonresonant Compton scattering death line are unable to produce pairs and become radio-quiet. Our simulations predict that GLAST will detect 276 radio-quiet and 344 radio-loud γ-ray pulsars, a significant improvement over EGRET. With GLAST's ability to perform blind period searches, we predict that 17 of the 276 radio-quiet γ-ray pulsars will be detected as pulsed sources. AGILE, scheduled to launch before GLAST, should detect 13 radio-quiet and 37 radio-loud γ-ray pulsars using the nominal sensitivity map; the best- and worst-case maps predict 24 and 9 radio-quiet and 64 and 27 radio-loud γ-ray pulsars, respectively. As the all-sky sensitivity and threshold maps are improved, these numbers will vary somewhat.
We present in Figure 12 the Aitoff projections of the γ-ray pulsars observed by EGRET and of those simulated for EGRET, AGILE and GLAST. Most of the pulsars detected by GLAST will be young pulsars with ages of ∼10⁵ years, within 500 pc of the Galactic disk. The asymmetric distribution of radio-loud and radio-quiet γ-ray pulsars results from the PMBPS detecting radio pulsars from ℓ = 260° to ℓ = 50°, mostly on the right side of the figure.
To further explore the effect of the parameterization of the radio and γ-ray beam geometries, one has to study the pulse profiles and the correlation between the radio and γ-ray profiles. In Figure 13, we present a pair of examples of simulated radio and γ-ray profiles for two radio-loud pulsars "detected" by EGRET, with periods of 528 ms (13a) and 41 ms (13b) and similar impact angles of 4.1° (13a) and −5.5° (13b). The radio profiles are for 400 MHz and the γ-ray profiles for energies greater than 100 MeV. The ACC model predicts radio core-to-cone flux ratios of 13 (Figure 13a) and 163 (Figure 13b), suggesting that the conal contribution to the flux is minor; however, the viewing geometry is also important. In Figure 13a, the line of sight intersects the radio cone and the outer portion of the radio core, displaying two radio peaks but only one curvature-radiation γ-ray peak; because of the large impact angle and the fairly long period, the radio core does not contribute significantly to the overall profile. The pulsar in Figure 13b has a similar impact parameter but a shorter period, so the radio core clearly dominates the profile and the two conal peaks of the γ-ray pulse are manifested. Since in this model the γ-ray emission originates within 2.5 stellar radii of the surface, and the radio core emission is believed to come from similar altitudes, there would be little aberration or time delay between them: they should appear correlated, with the radio core peak in phase with the single γ-ray peak, as in Figure 13a, or between the two γ-ray peaks, as in Figure 13b. On the other hand, the radio cone beam is believed to arise from a higher-altitude region, and the effects of aberration and time delay, not included in our model, might shift the conal peaks to earlier phase relative to the γ-ray or radio core peaks. This effect is actually observed in a number of triple radio profiles, where the core component lags the center of the cone (Gangadhara & Gupta 2001). In the ACC model, with its core-dominated short-period pulsars, we find that 54% of the EGRET radio-loud γ-ray pulsars have two γ-ray peaks in the pulse profile; all of these are core dominated, exhibiting a single (core) radio peak in the profile. The other 46% have a single γ-ray peak with a variety of radio profiles, including a single core peak, core and conal peaks, and two conal peaks. However, the fact that all two-peaked γ-ray profiles are core dominated is contrary to the few EGRET γ-ray pulsars, which typically show two γ-ray peaks with a single radio peak leading them (Thompson et al. 1997). This significant discrepancy between the observed and simulated radio and γ-ray profile correlations either questions the parameterization of the core-to-cone radio flux in the ACC model, suggesting that these short-period pulsars are different, having profiles dominated by cone emission, or questions the polar cap γ-ray emission model we have used.
By contrast, several recently discovered, young X-ray pulsars show single broad peaks in their profiles, with single radio peaks in phase with the X-ray peaks (Lorimer 2003 and references therein); they also have very low radio fluxes and luminosities. It has been shown that these radio characteristics can be understood in the ACC model if we are viewing the edge of the cone beam at large impact parameter, so that the cone beam appears single and the core component is not seen. In the polar cap emission geometry, the high-energy emission peak would then be in phase with the single radio peak, as observed. A number of these pulsars are coincident with EGRET sources, and AGILE may measure their γ-ray profiles.
Discussion
We have included radio and γ-ray beam geometries in our Monte Carlo code that simulates population statistics for radio and γ-ray pulsars, making predictions for the number of radio-quiet and radio-loud γ-ray pulsars detected by EGRET, AGILE and GLAST. The radio beam geometry follows the phenomenological model of ACC, with slight modifications to include the radius-to-frequency mapping of Mitra & Deshpande (1999). In the ACC model, radio pulsars are assumed to be standard candles, with their radio luminosity described by a simple power law in period and period derivative. The γ-ray beam geometry is derived from the theoretical work of Muslimov & Harding (2003) describing the emission from the slot gap in the polar cap model. These enhancements are significant improvements over our previous study (Gonthier et al. 2002). We have added the PMBPS to our select group of radio surveys, used the new distance model of Cordes & Lazio (2003), added all-sky threshold maps for EGRET (Grenier, private communication) and AGILE (Pellizzoni, private communication), and used realistic S_min thresholds for the PMBPS (Crawford, private communication). Neutron stars are assumed to be born at a constant rate, with a Galactic distribution described by Paczyński (1990), a flat distribution in initial period, and a two-Gaussian log-normal distribution in magnetic field, and their trajectories are evolved to the present within the Galactic potential (Paczyński 1990). We find better agreement when the magnetic field is assumed to decay exponentially with a decay constant of 2.8 Myr, somewhat shorter than the decay constant in Gonthier et al. (2002), which assumed an entirely different beam geometry and different radio and γ-ray luminosity models.
In order to obtain agreement between the observed and simulated radio pulsar distance distributions, a reasonable birth rate of 1.45 neutron stars per century, and a reasonable number of γ-ray pulsars observed by EGRET, we find it necessary to reduce the overall radio luminosity of the ACC model by a factor of 60. The major problem with our radio emission model is that we cannot simultaneously fit the distance distribution and the radio flux distribution, as seen in Figure 7. We found that slight adjustments to the radio beam geometry do not affect these distributions. This inadequacy perhaps challenges major assumptions of the model, such as radio pulsars being standard candles or short-period pulsars being core dominated; there is a lack of theoretical insight into the mechanism setting the radio luminosity. Randomizing the radio luminosity about its expected value does achieve some improvement in the flux distribution. The disagreement between the inferred and simulated distance distributions may result from assuming a smooth distribution of neutron star births in the Galactic plane rather than taking into account the spiral structure of the Galaxy; it is very apparent that most pulsars are detected in spiral-arm regions.
Our simulation is normalized to the number of radio pulsars observed by the selected group of surveys; it therefore predicts a neutron star birth rate as well as the number of γ-ray pulsars observed by each instrument, as shown in Table 2. Given the positional coincidences found between Parkes multi-beam radio pulsars and EGRET error boxes, we expected that adding the PMBPS would decrease the ratio of radio-quiet to radio-loud γ-ray pulsars for EGRET. Recently, Kramer et al. (2003) estimated about 19 ± 6 associations, which suggests that about 20 γ-ray pulsars that were radio-quiet for EGRET become radio-loud when the PMBPS is included. Removing the PMBPS decreases the simulated ratio of radio-loud to radio-quiet γ-ray pulsars for EGRET by 45%, from 2.7 (with the PMBPS) to 1.5 (without). For AGILE there is a similar decrease of 34%, from 2.9 to 1.9; for GLAST the effect is most evident, the ratio decreasing by 62%, from 1.25 to 0.5.
Our model assumes that pulsars are distributed azimuthally symmetrically throughout the Galactic disk, peaking radially at 4.5 kpc and falling off exponentially with distance from the center; it does not include the spiral structure of the Galaxy, where the pulsars are actually born. The locations of most detected pulsars are clearly correlated with the spiral arms of the Milky Way. Perhaps the difficulty our model has in reproducing the distance and flux distributions results from neglecting this strong correlation between pulsar locations and the spiral arms.
We believe that the fact that we cannot reproduce the decrease in the ratio of radio-loud to radio-quiet γ-ray pulsars for EGRET points to a significant limitation of our overall model. The facts that we cannot simultaneously account for the detected radio flux and inferred distance distributions, and that the simulated radio pulse profiles do not correlate with the γ-ray profiles as those detected by EGRET do, also suggest that we are not adequately accounting for everything. The problem with the pulse profiles perhaps implies that the ACC parameterization of the core-to-cone ratio with period does not apply to γ-ray pulsars. Recently, Crawford, Manchester & Kaspi (2001) reported that six young pulsars have strongly linearly polarized profiles characteristic of conal emission; yet in the ACC model, young pulsars with short periods would be characterized by significant core emission. In order to miss the core component, the impact angle must be fairly large, so that a single peak is observed from the edge of the conal beam. The pulsar J1105-6107 has a period of 0.063 s, and the ACC model predicts an integrated core-to-cone flux ratio of about 60, making it difficult to avoid seeing the core beam when a significant mean conal flux of 1.0 mJy is detected. Perhaps partial conal emission becomes dominant for short-period pulsars, as suggested by Manchester (1996).
The correlation between the radio and γ-ray pulse profiles and the shapes of the profiles are sensitive to the beam geometries as well as the viewing geometry. In the ACC model used in our simulations, short-period pulsars have core-dominated fluxes, and all two-peaked γ-ray pulse profiles display a single radio core peak. In order to see a cone-dominated radio profile, the impact parameter must be so large that a single-peaked γ-ray profile is seen. This appears contrary to the features of most of the γ-ray profiles detected by EGRET and again raises the issue of whether short-period pulsars are really core dominated or are instead partial-cone dominated, as suggested by Manchester (1996) and, more recently, by Crawford, Manchester & Kaspi (2001).
We conclude that a better radio beam model is required to account for the observed characteristics of radio pulsars. In addition, recent theoretical work by Dyks & Rudak (2003) and Muslimov & Harding (2004) indicates the importance of including the caustic component of γ-rays in the emission geometry. We hope that in the near future, we will be able to include more realistic emission geometries of the radio and γ-ray beams.
Acknowledgements
We would like to acknowledge the many conversations with Zaven Arzoumanian and the insight he has given us. We also thank an anonymous referee for suggesting many improvements to the manuscript. We appreciate the S_min code for the Parkes multi-beam pulsar survey provided by Froney Crawford. We are grateful to Alberto Pellizzoni for providing the all-sky sensitivity maps for AGILE, and to Isabelle Grenier for allowing us to use the EGRET all-sky sensitivity map. We express our gratitude for the generous support of Research Corporation (CC5813), the National Science Foundation (REU and AST-0307365), and the NASA Astrophysics Theory Program.
Fig. 13. Examples of radio and γ-ray pulse profiles for two radio-loud γ-ray pulsars simulated for EGRET, having periods of 528 ms (a) and 41 ms (b).
"Physics"
] |
Clustering and Analysis of Dynamic Ad Hoc Network Nodes Movement Based on FCM Algorithm
Abstract— Clustering is a major exploratory data mining activity and a popular statistical data analysis technique used in many fields. Generally speaking, cluster analysis is not an automated one-shot task but an iterative process of knowledge discovery, a multi-objective optimisation involving trial and error: parameters for pre-processing and modeling frequently need to be modified until the output has the desired properties. In fuzzy clustering, data points may belong to several clusters; each data point is assigned membership grades that reflect the degree to which it belongs to each cluster. The fuzzy c-means (FCM) algorithm is among the most widely used fuzzy clustering algorithms. In this paper we use this method to perform a typological analysis of dynamic ad hoc network node movement, demonstrate that good fuzzy-clustering performance can be achieved on a simulated data set of dynamic ad hoc network (DANET) nodes, and show how this principle can be used to formulate node clustering as a partitioning problem. Cluster analysis aims at grouping a collection of nodes into clusters in such a way that nodes within the same cluster exhibit a high degree of correlation, whereas nodes in different clusters are highly dissimilar. The FCM algorithm is implemented and evaluated on a data set simulated with the NS2 simulator using an optimized AODV protocol. The results show that the technique achieves maximum stability values of 98.41% for cluster centers and 99.99% for nodes.
Introduction
Cluster analysis itself is not one particular algorithm but a general problem to be solved. It can be accomplished by different algorithms that vary substantially in their notion of what constitutes a cluster and how to identify clusters efficiently [1][2]. Fuzzy c-means (FCM) clustering has gained significant attention for its special features among the many fuzzy clustering techniques. Because it can aggregate several base clusterings of the same object set into a consensus solution, a cluster ensemble has many attractive features, such as improved solution consistency, robust clustering, and knowledge reuse. For soft partitioning solutions, however, the process still needs to compute and store the membership-product matrix, with space and time complexity quadratic in the data size. Fuzzy clustering aims at obtaining a flexible partition in which every element has membership in multiple clusters, with values in [0, 1]. FCM clustering algorithms are commonly used on low-dimensional data because of their performance and effectiveness [3][4].
FCM has many benefits, including simple implementation, relatively stable behaviour, applicability to multi-channel data, and the capability of modeling uncertain data; however, it does not treat spatial information effectively. For the data-clustering problem, the system's dynamic behaviour must be sufficiently excited so that the data collected from the system are rich enough [5]. FCM can be viewed as an advanced clustering strategy that allows each data point to belong to several clusters with different membership degrees; the process works by minimizing an objective function [6][7]. The FCM clustering method has been successfully applied in feature analysis, classification modeling and clustering [8][9].
The purpose of this work is to demonstrate how the fuzziness principle can be applied to a dynamic ad hoc network node data set. The fuzzy c-means algorithm is implemented to examine the behavior of the simulated data set. In contrast to hard clustering, soft clustering does not commit a data point irrevocably to a single cluster center, which enables the objective function to escape local extrema. Our goal was not to propose a clustering algorithm that shows the best results in terms of objective-function cost, nor a more efficient one. Rather, we aim to lay the groundwork for a more axiomatic and natural mechanism by which a data analyst can steer the clustering process in the context of data exploration. The main motivation is to offer a collection of rational clusters among the candidate ones. Even so, the experimental results indicate that the biased algorithm not only significantly speeds up successive runs but also appears to produce, on average, reasonable clusters with better validity index values; we also found less sensitivity to initialization. Such results are known as a desirable shrinking side effect.
We identify several problems in analyzing DANET node movement: it is hard to quantify node behaviour as a node moves along its path at different velocities in various directions; the node position is difficult to determine; and node-movement analysis incurs high computational overhead. Node clustering and analysis are therefore needed, and node stability is predicted for better performance through proper positioning techniques.
Our contributions are:
i. Building a proposed system using NS2 to identify and monitor a set of active mobile nodes communicating in a DANET using the optimized Ad hoc On-Demand Distance Vector (AODV) routing protocol, and generating traffic and mobility scenarios that produce essential information about node positions, directions, and speeds, saved as a CSV file; this represents the data-set collection step.
ii. Reducing the dimension of the data and extracting the position information as a low-dimensional data set for clustering and analysis with the FCM algorithm. FCM provides a more precise computation of cluster membership by performing full inverse-distance weighting and finding minimizers of the objective function, and it has been used successfully in DANET clustering applications in military, civilian, and commercial areas. This technique achieves more accurate (stable) results and reduces the time taken to extract information from a large data set, since the continuous node values incorporate more information that can improve subsequent analysis. Improving stability and decreasing the overhead increase the efficiency of the network.
iii. Increasing network stability compared to other algorithms.
The work is organized as follows: Section 2 presents related work. Section 3 describes the mathematical model of fuzzy c-means. Section 4 explains the methodology in detail. Section 5 describes the experimental results and discussion. Finally, Section 6 outlines the conclusions and future work.
Using FCM technique
There have been several proposals for adopting the FCM algorithm in ad hoc networks. A fuzzy cluster mean (FCM)-based clustering approach is proposed for WSNs by W. Zhenhua et al.: a new clustering method that first creates the clusters and afterwards picks the cluster head (CH). This FCM-based strategy has the strong features of fast clustering, reduced energy consumption, and applicability to various modes of data transmission; simulations verify its accuracy and feasibility, and its energy efficiency is demonstrated to be higher than that of equivalent clustering algorithms [10]. R. Dutta et al. introduced a low-energy adaptive unequal clustering protocol using fuzzy c-means in wireless sensor networks (LAUCF), an unequal-size clustering paradigm for network arrangement based on the FCM algorithm that makes energy dissipation more uniform among the cluster-head nodes, hence increasing the lifespan of the network [8]. G. Abdulsahib et al. investigate the consequences of using clustering techniques in ad hoc networks and how such a strategy improves resource savings and reduces time delays. They also explain clustering, cluster structure, cluster-linking methods, and the various algorithms used for cluster-head selection and their impact on MANETs. Additionally, the study measures the effectiveness of MANETs to demonstrate the impact of clustering on ad hoc networking efficiency, using a cluster-based routing protocol (CBRP), which can be viewed as one of the clustering routing protocols. The CBRP is evaluated by comparison with other routing protocols, such as AODV, DSR, and DSDV, none of which uses a clustering technique; the results are analyzed to illustrate the benefits and drawbacks of using clustering in MANETs [11]. J. Gu et al. introduce a new clustering algorithm named sparse-learning-based fuzzy c-means (SL_FCM). Initially, in order to decrease the computational complexity of the sparse-representation (SR)-based FCM process, most of the energy of the discriminant feature obtained by solving an SR model is retained and the rest is discarded. In this method some redundant information (i.e., similarity between samples from different groups) is also eliminated from the discriminant feature, which further improves clustering quality. In addition, the positions of the retained valid values are used as a discriminant feature to redefine the distance between samples and cluster centers in SL_FCM, further enhancing clustering performance. The weighted distance in SL_FCM enhances the similarity of samples from the same class and the differences of samples from different classes, thus improving the clustering effect. However, as the dimensions of each sample's stored discriminant feature differ, set operations are used to define the distance and cluster centers in SL_FCM [4]. A new fuzzy c-means clustering algorithm with random projection was introduced by M. Ye et al.; empirical tests show that the new algorithm not only retains the precision of classic FCM clustering but is also more efficient than the original algorithm and than clustering based on singular value decomposition. At the same time, a new cluster-ensemble method with random projection based on FCM clustering is suggested.
The innovative agglomeration technique can efficiently compute a spectral embedding of the data from a representation based on cluster centers, and it scales linearly with data size [3]. D. Wang et al. suggest an original two-phase approach to designing and characterizing fuzzy rule-based models in which the FCM algorithm forms the initial fuzzy sets [12]. A method named FCM-Q LEACH-VANET is presented by T. Mamatha and P. Aishwarya, in which the road-side unit (RSU) to base station (BS) link uses the IEEE 802.11p protocol to minimize the transfer time from the source node (SN) to the BS [13].
Using other techniques
There have also been several proposals for using other clustering techniques in ad hoc networks. Z. Y. Rawashdeh and S. M. Mahmud develop a new clustering technique suitable for the VANET highway setting to improve network topology stability; it uses the speed difference as a parameter to establish a relatively stable cluster arrangement, and a new multi-metric algorithm for cluster-head election was also developed [14]. C. Konstantopoulos et al. introduce a novel clustering algorithm based on a scheme that accurately predicts each mobile host's mobility from its neighborhood stability. This knowledge is then used to build each cluster from hosts that will remain neighbors long enough, ensuring that clusters highly resistant to host mobility are created; they use provably good information-theoretic techniques to estimate future host mobility, allowing on-line learning of a robust probabilistic model of current host mobility [15]. Y. Zhang et al. propose a distributed group mobility adaptive (DGMA) clustering algorithm for mobile ad hoc networks (MANETs) based on a revised group mobility metric, linear-distance-based spatial dependency (LDSD), derived from the linear movement distance of a node instead of its instantaneous velocity and direction [16]. S. R. Valayapalayam Kittusamy and co-authors present a novel cluster structure and cluster head (CH) election algorithm suitable for VANETs. The proposed adaptive weighted clustering protocol (AWCP) groups the random nodes, after which the optimal CH is obtained through network parameter optimisation using an advanced algorithm called the enhanced whale optimization algorithm (EWOA). For each vehicle in the trusted clustering model, the movement is analyzed with the speed and position identified by the vehicular network mobility routing protocol, and the proposed AWCP-EWOA model analyzes the distance between the trusted vehicle node and the RSU [17]. An improved multi-parameter weighted clustering algorithm called TCWCA has been developed by L. Shan and L. Zhang to improve the clustering efficiency of wireless networks. The algorithm takes into consideration node degree, inter-node distance, node mobility, node energy, and the increase in clustering association as the network topology evolves; additionally, it applies increments of the regional topology association of network nodes to the weighting-factor calculation [18]. M. Ren et al. develop a new dynamic clustering scheme based on mobility and stability for urban-area scenarios; the proposed scheme uses the vehicle's moving path, relative position, and a lifetime-estimation relation [19]. S. Pathak and S. Jain suggested a prioritized weighted clustering algorithm for MANETs that operates with three strategies to reduce cluster-head changes and clustering overhead. The first strategy decides the cluster head dynamically based on initial priorities within the cluster of neighbor nodes; the second and third automatically pick a new cluster head during cluster maintenance, without delay, when the remaining battery power of the old cluster head falls below the minimum threshold value or in the absence of a cluster head [20].
They also previously presented an optimized stable clustering algorithm that provides greater network stability by minimizing cluster-head changes and reducing clustering overhead. A new node is added in the proposed algorithm to act as a backup node within the cluster; this backup node takes over as cluster head when the actual cluster head steps down (or dies), after which the new head elects another backup node [21]. A new weight-based algorithm proposed by A. Karimi and others uses not only a node's own features to assess its weight but also considers the direct effect of adjacent nodes' features; this specifies the weight of virtual node links and the impact of those weights on the final weight of the node. With this technique, the maximum weight is allocated to the best candidates for cluster head, and the selection of nodes gains in accuracy [22]. M. Chatzidakis and S. Hadjiefthymiades propose a clustering scheme that produces clusters stable enough to allow trust information to propagate, initially through the cluster and eventually through the entire network; they also suggest a trust scheme that assigns and updates a trust value for each network node, thereby exposing malicious nodes and disseminating this knowledge across the network [23]. P. Basu et al. suggest a distributed clustering algorithm, MOBIC, based on the use of a mobility measure for cluster-head selection, and show that this results in more stable cluster creation than the well-known Lowest-ID clustering algorithm [24]. An appropriate technique should therefore be designed that responds easily to topology changes. M. Ahmad and others propose a methodology that fuses the properties of the genetic algorithm and the honey bee algorithm: the VANET clustering problem (CP) is first formulated as a dynamic optimization problem, and an optimization algorithm called Vehicular Genetic Bee Clustering (VGBC), based on honeybee-algorithm and genetic-algorithm properties, is proposed to solve the CP in VANETs. Individuals (bees) represent a feasible clustering structure in VGBC, and their fitness is calculated from load balance and stability [25]. As they suggested before, honeybee-algorithm-based clustering generates clusters efficiently with fewer resources, such as energy and bandwidth usage; a node is selected as cluster head based on node degree, neighbor behavior, mobility direction, mobility speed, and remaining energy. The proposed technique, inspired by the foraging behavior of honey bees, gives effective and stable cluster formation owing to the productive nature of the bees and the consideration of the maximum number of parameters [26]. K. A. Awan and others present a trust-based clustering framework that enables clusters to recognize a trusted CH. The novel features of the proposed technique include trust-based CH selection that incorporates a node's knowledge, reputation, and experience; a backup head is also determined by analyzing the trust of each node in a cluster. The main advantage of trust-based clustering is detecting malicious and compromised nodes, whose recognition helps eliminate the possibility of invalid information [27]. Considering the high efficiency of clustering methods among routing algorithms, S. A. Sharifi and S. M. Babamir produce a new clustering method; noting the good performance of evolutionary algorithms (EAs) in finding suitable cluster heads, they present an EA-based method called ICA (Imperialist Competitive Algorithm) with numerical coding. By considering the particular conditions of a MANET and predicting the mobility direction of nodes, they avoid further re-clustering, thereby reducing overhead [28]. A novel dynamic clustering mechanism with a nondeterministic simulated (DCHA) technique is implemented by R. Sundar and A. Kathirvel to avoid path loss and to create a stable connection between source and destination; dynamic clusters are arranged along the path based on the local and global minima of the mobile nodes [29]. M. Ni et al. propose a mobility-prediction-based clustering (MPBC) scheme for ad hoc networks with high node mobility, in which a node can change its associated cluster head (CH) many times during its connection lifetime. The suggested framework involves an initial clustering phase and a cluster-maintenance phase. The Doppler shifts of regularly exchanged Hello packets among neighboring nodes are used to estimate their relative velocities, and these estimates serve as the basic information in MPBC. In the initial clustering phase, the nodes with the smallest relative mobility in their neighborhoods are selected as CHs; in the cluster-maintenance phase, mobility-prediction techniques handle the problems caused by node motion, such as potential link losses to current CHs and CH position changes, in order to increase connection lifespan and provide more stable clusters. An analytical model is constructed to determine the upper and lower limits of the average connection lifespan and the average rate of CH change in MPBC [30].
Fuzzy C-means (Mathematical Model)
We first state the fuzzy c-means clustering problem and then precisely describe the FCM clustering algorithm [3].
Definition 1: "The fuzzy c-means clustering problem". A set of data on points have been given with features labeled by × matrix X, is a positive integer considered as the number of clusters, while > 1 is the fuzzy constant, then find the partition matrix U ∈ R × and centers of clusters centers V = {v1, v2, . . ., v }, such that: Herein, ‖ ⋅ ‖ indicates norm, ordinarily Euclidean norm. The partition matrix item indicate to membership of point in the cluster . Furthermore, for each ∈ [1,n], The objective function is defined as: First, FCM algorithm calculates membership degrees by distances among cluster centers and points, then updates each cluster center according to the degree of membership. A solution is achieved through the computation of cluster centers and the iterative partition matrix. It would always be remembered that the clustering FCM only achieved a locally optimal solution, as well as initialization effect on the final outcome of clustering [3][4][8] [13][5] [6] . Detailed FCM clustering method is shown as below.
Methodology
Our system design for typological analysis of dynamic ad hoc network (DANET) node movement involves three stages. Stage 1 - Data collection: data are obtained from the NS2 simulator, as described in the next subsection. Typically, a high-dimensional data set is created to capture enough rich system information. Data clustering can be viewed as "coarse" modeling based on information (or data structure) mined from the data; the dynamic behaviour of the system must be sufficiently excited for the data-clustering problem so that the data from the system are sufficiently rich.
Stage 2 - Dimension reduction: a high-dimensional data set can make the subsequent processing inefficient or ineffective. It is therefore beneficial first to reduce the data dimension to a reasonable size, retaining as much of the original useful information as possible, and then feed the reduced-dimension data into the clustering system.
Stage 3 - Data clustering: the data-clustering function is realized using FCM to derive the main spatial-distribution characteristics of the spatiotemporal dynamic system. FCM is used for data analysis to identify the intrinsic spatial distribution nature of the spatiotemporal dynamic system, as described in subsection 4.2.
Simulator and data set
In this paper, a simulation system is required to build the proposed dynamic ad hoc network (DANET) and to collect the data set as a CSV file. Network Simulator version 2 (NS2) is employed to simulate this system; the topology, mobility, traffic model, and simulation behaviors are established in NS2. Node behaviors are designed to reproduce real-world node mobility in a DANET. The mobility model depends on the node parameters described in Table 1: the starting location of each mobile node, and its speed, velocity, and direction, which change over time. Sensitive information, control data, notification messages, cooperative awareness messages, and warning messages at the transport layer are carried by UDP/TCP agents, which generate the node-communication environment; some communication parameters, such as traffic type and packet size, are set there. The TCP transport protocol is employed to establish the traffic model for the DANET. The mobility and traffic models are established to represent the communication and movement patterns of the DANET; hence, these models set node acceleration, velocity, location, and speed, modified over time. A routing protocol is utilized to facilitate communication within the network in the mobility and traffic models, so the Ad Hoc On-Demand Distance Vector (AODV) routing protocol is used, and some files of the AODV implementation are modified in order to broadcast essential information about node positions, directions, and speeds. The CSV output file of NS2 is one of the most important outputs of the DANET simulation and serves as the collected data set. The design steps of the simulated system are shown in Figure 1.
Fig. 1. Simulated system design steps
The collected CSV output file, however, contains a high-dimensional data set. We therefore first reduce the dimension of the data, keeping as much of the useful original information as possible, such as the position information, to form the input data set for the clustering stage.
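A minimal sketch of this reduction step, assuming the trace CSV contains (hypothetical) columns time, node, x, and y; the actual layout depends on the modified AODV trace format:

```python
import pandas as pd

def extract_positions(csv_path):
    """Reduce the high-dimensional NS2 trace to the position features used
    for clustering.  The column names ('time', 'node', 'x', 'y') are
    hypothetical stand-ins for the modified AODV trace format.
    """
    df = pd.read_csv(csv_path)
    # Keep only the (x, y) coordinates of each node at each time step:
    # 9 nodes over a 200 s simulation yield the 1818 two-feature patterns
    # used in the experiments described later.
    S = df[["x", "y"]].to_numpy()
    return S, df[["time", "node"]]
```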
Clustering by using FCM
After the reduction stage, the original high-dimensional data set D is reduced to a low-dimensional data set S = {s_1, s_2, ..., s_N}, with s_j = (s_j1, ..., s_jp), where s_j is the jth pattern (1 ≤ j ≤ N), s_ji is the ith feature of the jth pattern (1 ≤ i ≤ p), N is the number of patterns, and p (p < P) is the number of features after dimension reduction, P being the number of original features.
The FCM algorithm presented in Section 3 is executed directly on S. Using the three FCM steps mentioned above, we obtain the fuzzy partition matrix U = [μ_kj]_{c×N} and the cluster-center matrix V = [v_1, ..., v_c], and finally the grouping of the data. It should be remembered that the initialization of U and V can affect the clustering results. The algorithm is formulated as a sequence of iterations subject to the required conditions on U and V. An appropriate upper bound on the number of clusters, C_max, must be selected, and the fuzzifier m > 1 and the termination criteria must be set.
Following the initialization step, the algorithm proceeds by successively updating the partition matrix and the prototypes until a predefined termination criterion is reached. The algorithm halts when there are no major updates in the partition matrix, no changes in the prototypes, and no further improvement in the cost function.
The algorithmic procedure of the FCM clustering technique used to obtain the results shown in the experimental section is as follows:
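The paper's original step-by-step listing does not survive in this excerpt. As a stand-in, here is a minimal NumPy sketch of the standard FCM iteration described above (alternating membership and center updates with a tolerance-based termination criterion); the function and parameter names are ours, and the defaults mirror the experimental setup (c = 4, m = 2):

```python
import numpy as np

def fcm(S, c=4, m=2.0, tol=1e-5, max_iter=100, seed=None):
    """Fuzzy c-means on the N x p pattern matrix S.

    Returns the c x N partition matrix U and the c x p center matrix V.
    """
    rng = np.random.default_rng(seed)
    N = S.shape[0]
    # Random initial memberships, normalized so each column sums to 1.
    U = rng.random((c, N))
    U /= U.sum(axis=0)

    for _ in range(max_iter):
        Um = U ** m
        # Center update: membership-weighted mean of the patterns.
        V = (Um @ S) / Um.sum(axis=1, keepdims=True)
        # Distance of every pattern to every center (c x N).
        d = np.linalg.norm(S[None, :, :] - V[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)              # guard against zero distances
        # Membership update: full inverse-distance weighting.
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0)
        if np.abs(U_new - U).max() < tol:  # termination criterion
            U = U_new
            break
        U = U_new
    return U, V
```

With c = 4, m = 2, and the 1818 two-feature position patterns, this reproduces the setup described in the next section.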
Experimental Results and Discussion
The objectives of the experiment are to demonstrate that the proposed clustering technique can provide various rational clusters at different granulations and from different perspectives, and to show how the principle of fuzziness can be applied to DANET data sets. FCM has been implemented as a tool to apply to and check the behaviour of the data set. The cluster centers and the membership matrix are updated in each iteration. The initial cluster centers are selected randomly, so that various outcomes can be obtained and the algorithm is not stuck on a particular one. In the case of divergence, the number of iterations reaches the maximum permitted and the algorithm is terminated.
Popular metrics are used for the performance evaluation of clustering algorithms, such as cluster-center stability, cluster-center changes, average cluster stability, node stability, node changes, and average node stability. These are quite generic terms, and DANET-specific definitions are needed to ensure consistency between the various nodes.
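The paper does not spell out formulas for these metrics. One plausible reading, used in the sketch below, measures stability as the percentage of update steps in which the centers (or the nodes' hard assignments) do not change beyond a tolerance; these definitions are our assumption, not the paper's.

```python
import numpy as np

def center_stability(V_history, tol=1e-3):
    """Percentage of iterations in which every cluster center moved by less
    than tol -- one plausible reading of 'cluster center stability'.
    V_history : list of c x p center matrices, one per iteration.
    """
    moves = [np.linalg.norm(V_history[i + 1] - V_history[i], axis=1).max()
             for i in range(len(V_history) - 1)]
    return 100.0 * np.mean(np.array(moves) < tol)

def node_stability(U_history):
    """Percentage of iterations in which no node changed its hard
    (maximum-membership) cluster assignment."""
    labels = [U.argmax(axis=0) for U in U_history]
    same = [np.array_equal(labels[i + 1], labels[i])
            for i in range(len(labels) - 1)]
    return 100.0 * np.mean(same)
```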
The interconnection and effects between nodes should be analyzed and presented as well. Test scenarios with different node movement patterns should be provided.
The initial parameters for testing the behavior of the centers and nodes are as follows: number of centers c = 4; fuzziness factor m = 2; number of patterns (for the dynamic nodes) N = 1818 (for 9 nodes); number of features per pattern p = 2 (the x and y position coordinates); and simulation time n = 200 s (t = 1, ..., n).
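Under these settings, the clustering sketch above would be invoked roughly as follows; the array name S stands for the reduced data set from the earlier sketch.

# Hypothetical usage: S holds 1818 patterns x 2 features
# (x, y positions of 9 nodes over 200 s of simulation).
U, V = fcm(S, c=4, m=2.0)
labels = U.argmax(axis=0)   # hard assignment of each pattern, for inspection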
Table 2 reports the results of the first iteration with the initial center values. After the first iteration, ten further iterations were executed to obtain the best results; the maximum stability values for the cluster centers and nodes (98.41% and 99.99%, respectively) were reached at the last iteration, as shown in Tables 4 and 5. To summarize the theoretical findings, we introduce our clustering approach with the aim of increasing the stability of the network topology and making it less dynamic. This approach takes the positions of the nodes into consideration during the clustering process.
Instability occurs when a cluster center is unable to perform its desired responsibilities or leaves the cluster. The lifetime of a cluster center is a very significant aspect when examining the stability of the mechanism, because the topology remains stable if a cluster center remains the head of a particular cluster for a long interval of time. We find that the FCM algorithm provides a more precise computation of cluster membership by performing full inverse-distance weighting and finding minimizers of the objective function, so it has been used successfully for DANET clustering applications. This technique achieves better accuracy (stability) in the results. It enhances stability and reduces the computational cost that would otherwise be incurred by repeatedly reselecting centers after short spans of time (numbers of iterations), because it computes and updates the centers implicitly in a precise and fast computation; no additional technique is needed to improve the process. Improving stability and decreasing the overhead increase the efficiency of the network.
We have evaluated our proposed method for accuracy (the stability criteria) and efficiency (by comparison with other methods). Extensive comparisons between existing clustering algorithms and our work in terms of performance (cluster stability) are shown in Table 6.
Conclusion and Future Work
The following points can be concluded from the experimental results: i. After ten iterations, we obtain the best results and reach the maximum stability values for both cluster centres and nodes (98.41% and 99.99%, respectively). ii. The algorithm is experimentally superior in terms of stability convergence: it enhances stability and reduces the computational cost otherwise incurred by repeatedly reselecting centers after short spans of time (numbers of iterations). iii. The FCM algorithm successfully applies the fuzziness concept to DANET.
iv. The behaviour of fuzzy c-means on DANET data is comparable to that of conventional numerical-vector fuzzy c-means. v. From the experimental results, the most appropriate value of m was observed to lie in the range 2 ≤ m < 3. vi. The principle of fuzziness is implemented, and the results obtained from the algorithm are more meaningful and easier to interpret. vii. The results of the presented method depend fundamentally on m, the initial state, and the similarity/dissimilarity metric used to compute distances among DANET nodes and centres. viii. For small values of m close to 1.1, the algorithm's behaviour is almost identical to hard clustering, as can be seen from the membership matrix. ix. Increasing the factor m increases the number of iterations required to find a solution.
This can be explained as follows: in the range 2 ≤ m < 3, alongside the high variance among the values of a feature, the fuzzy membership degrees assigned at the level of event-to-feature and pattern-to-cluster increase. x. The structure of the DANET nodes and the similarity between feature values raise the number of iterations required to reach a solution.
As shown in this study, FCM is presented as a major technique for analyzing the behavior of DANET node movement and finding a stable clustering topology. For future work, a precise mathematical model of the clustering result that accounts for uncertainties is needed for better performance, along with a suitable initialization method for the clusters: because spatio-temporal dynamics data contain a spatial-ordering feature, an appropriate initialization strategy, such as determining the optimal number of clusters, should be addressed to achieve better performance. In addition, a trial with real DANET movement of participating nodes in a restricted geographical area could be the first step towards real-life testing of this technology.
Authors
Sumaya Hamad is a member of the College of Computer Science and Information Technology (CSIT) and an Assistant Teacher at the University of Anbar, Anbar, Iraq. She received the B.Sc. degree (good, first class) in computer science from the University of Anbar in 2002 and the M.Sc. degree in computer science from the University of Anbar in 2012. She is currently a PhD student in the Computer Science Department at the University of Technology, Baghdad, Iraq. She has published 5 refereed journal and conference papers. Her current research interests include mobile computing, artificial intelligence, ad hoc networks, search engines, and information technology. Email<EMAIL_ADDRESS>Dr. Yossra H. Ali is an Assistant Professor. She received her B.Sc., M.Sc., and PhD degrees in 1996, 2002, and 2006, respectively, from the University of Technology, Department of Computer Sciences, Iraq. She joined the University of Technology, Iraq, in 1997. During her postgraduate studies she worked on computer networks, information systems, agent programming, and image processing; she also has experience in artificial intelligence and computer data security. She is a reviewer for many conferences and journals and has supervised undergraduate and postgraduate (PhD and MSc) dissertations for many students in computer sciences; she has a number of profes- | 7,138.6 | 2020-10-19T00:00:00.000 | [
"Computer Science"
] |
Associations between Metabolic Syndrome and Bone Mineral Density and Trabecular Bone Score in Postmenopausal Women with Non-Vertebral Fractures
The medical, social, and economic relevance of osteoporosis stems from the reduced quality of life and the increased disability and mortality of patients as a result of fractures due to low-energy trauma. This study aimed to examine the associations among metabolic syndrome components, bone mineral density (BMD), and trabecular bone score (TBS) in menopausal women with non-vertebral fractures. 1161 menopausal women aged 50-79 years were examined and divided into three groups: A included 419 women with increased body weight (BMI 25.0-29.9 kg/m2), B included 442 women with obesity (BMI >29.9 kg/m2), and C included 300 women with metabolic syndrome (diagnosed according to the IDF criteria, 2005). BMD of the lumbar spine (L1-L4), femoral neck, total body, and forearm was measured using dual-energy X-ray absorptiometry. The bone quality indexes were measured with the Med-Imaps software. All analyses were performed using Statistical Package 6.0. BMD of the lumbar spine (L1-L4), femoral neck, total body, and ultradistal radius was significantly higher in women with obesity and metabolic syndrome compared to the pre-obese ones (p<0.001). TBS was significantly higher in women with increased body weight compared to obese and metabolic syndrome patients. Analysis showed a significant positive correlation between waist circumference, triglycerides level, and BMD of the lumbar spine and femur. A significant negative association between serum HDL level and BMD of the investigated sites was established. The TBS (L1-L4) indexes positively correlated with HDL (high-density lipoprotein) level. Despite the fact that BMD indexes were better in women with metabolic syndrome, the frequency of non-vertebral fractures was significantly higher in this group of patients. Keywords—Bone mineral density, trabecular bone score, metabolic syndrome, fracture.
I. INTRODUCTION
Osteoporosis and metabolic syndrome are important public health problems, due to the decreased quality and reduced life expectancy of patients as a result of low-trauma fractures in the case of osteoporosis and the possibility of cardiovascular, endocrine, and other complications in the case of metabolic syndrome development [1], [5]. The frequency of both diseases increases with the age of the patient and the duration of the menopausal period as a result of the slowdown in metabolism and the development of estrogen deficiency [8], [11].
V. Povoroznyuk is with the D.F. Chebotarev Institute of Gerontology of NAMS of Ukraine, Kyiv, 04074 Ukraine (phone: 38-097-3734189; fax: 38-044-4304174; e-mail: okfpodac@ukr.net). Lar. Martynyuk and Lil. Martynyuk are with the State Higher Educational Institution "I. Horbachevsky Ternopil State Medical University of the Ministry of Health of Ukraine" (e-mail: l_martynyuk@yahoo.com, lili_marty@ukr.net). I. Syzonenko is with the Kyiv City Center for radiation protection of citizens affected by the Chernobyl disaster, Kyiv, Ukraine (e-mail: irynasyzonenko@rambler.ru).
Traditionally, osteoporosis is diagnosed according to a history of low-energy fractures or the results of BMD (T-score), determined by X-ray densitometry [10]; however, BMD accounts for only 70-75% of bone strength [13]. Other factors that affect it include the macro-geometry of cortical bone and the micro-architecture of trabecular bone, including the presence of damage and cracks, which can be quantified by the TBS index, patented by Med-Imaps (Bordeaux, France) in 2006 [3]. In our opinion, the evaluation of TBS is an important part of this work.
Researchers have paid much attention to the study of the relationships between metabolic syndrome and osteoporosis. Abdominal obesity, high glucose (as a result of insulin deficiency or insulin resistance), high triglycerides, and low high-density lipoprotein, which are the main components of metabolic syndrome, have a significant impact on bone tissue and fracture development, but published research results are contradictory [4], [7], [12], [14], [16]. This discrepancy of opinions prompted our investigation.
The aim of our study was to evaluate the relationships between metabolic syndrome components and BMD and TBS in postmenopausal women with low-trauma non-vertebral fractures.
The Statistical Package 6.0 (©StatSoft, Inc.) was used for the analyses. Continuous variables are reported as mean ± SD. Pearson correlations examined the relationships between continuous variables, with significance set at p<0.05.
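The correlation step described here corresponds to the following minimal sketch; SciPy's pearsonr returns the coefficient together with its p-value, and the variable names are placeholders rather than the study's actual data arrays.

# Hedged sketch of the Pearson correlation analysis (p < 0.05 as significance).
from scipy.stats import pearsonr

r, p_value = pearsonr(waist_circumference, lumbar_spine_bmd)  # placeholder arrays
if p_value < 0.05:
    print(f"significant correlation: r = {r:.2f} (p = {p_value:.3f})")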
BMD of the lumbar spine (L1-L4) was significantly higher in patients of groups B and C without fractures (Table I). BMD of the femoral neck was significantly lower in women with obesity and non-vertebral fractures (Table II). BMD of the total body and ultradistal radius was significantly better in all groups of women without fractures compared to patients with non-vertebral fractures (Tables III and IV). TBS (L1-L4) was significantly higher in patients without fractures in the groups of women with increased body weight and obesity (p<0.05) (Table V).
The metabolic syndrome laboratory components (serum triglycerides and HDL indexes) were then analyzed. We established a significantly higher triglycerides level (A: 1.049±0.381 mmol/L; B: 1.030±0.322 mmol/L; C: 1.605±0.703 mmol/L; F=162.669; p<0.001) and a significantly lower HDL level (A: 1.531±0.372 mmol/L; B: 1.509±0.314 mmol/L; C: 1.170±0.256 mmol/L; F=126.832; p<0.001) in patients with metabolic syndrome. There was no difference in triglycerides level between women with non-vertebral fractures and those without them in any of the investigated groups (Table VI). The HDL level was significantly lower in patients with non-vertebral fractures and metabolic syndrome (Table VII). In the analysis of metabolic syndrome components, the waist circumference component was positively associated with BMD of the lumbar spine and femur (Fig. 1). The study revealed a significant positive correlation between serum triglycerides level and both investigated BMD sites (Fig. 2). A number of investigators have reported relationships in accordance with our own findings [2]. A significant positive correlation was found between HDL serum level and TBS, and an inverse association with BMD of the lumbar spine and femur (Fig. 3).
We calculated the percentage of non-vertebral fractures in the patients' history (Fig. 4).
Low-trauma non-vertebral fractures occurred in 14.6% of women with increased body weight, 17.4% of women with obesity, and 21.3% of patients with metabolic syndrome.
Fig. 4 Frequency of low-trauma non-vertebral fractures in women with increased body weight (A), obesity (B), and metabolic syndrome (C)
Significant differences in the frequency of non-vertebral fractures were not found between the group of women with obesity and the groups with increased body weight or metabolic syndrome (X2=1.312, p>0.05 and X2=1.780, p>0.05, respectively), but the difference was significant between the groups of pre-obese women and patients with metabolic syndrome (X2=5.590, p<0.05). Similar results were found by other investigators [9].
IV. CONCLUSION
Menopausal women with obesity and metabolic syndrome have a significantly higher BMD at all measured sites compared to women with pre-obesity. TBS is significantly lower in women with non-vertebral fractures and increased body weight or obesity. A significant positive correlation is established between waist circumference, triglycerides level, and BMD of the lumbar spine and femoral neck. The correlation between HDL level and BMD at all sites is significant and negative; at the same time, HDL is positively associated with the TBS indexes. There is no significant difference in the frequency of low-trauma non-vertebral fractures between the groups of pre-obese and obese women. At the same time, the incidence of osteoporotic non-vertebral fractures is significantly higher in women with metabolic syndrome compared to the other patients. Metabolic syndrome may not protect against any type of fracture, and future investigations are necessary.
"Medicine",
"Biology"
] |
Global Attractor of Thermoelastic Coupled Beam Equations with Structural Damping
Copyright © 2017 Peirong Shi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In this paper, we study the existence of a global attractor for a class of n-dimensional thermoelastic coupled beam equations with structural damping,
$$u_{tt} + \Delta^2 u + \Delta^2 u_t - \Big[\sigma\Big(\int_\Omega |\nabla u|^2\,dx\Big) + \varphi\Big(\int_\Omega \nabla u\cdot\nabla u_t\,dx\Big)\Big]\Delta u + f_1(u) + g(u_t) + \nu\Delta\theta = q(x) \quad \text{in } \Omega\times\mathbb{R}^+,$$
$$\theta_t - \Delta\theta + f_2(\theta) - \nu\Delta u_t = 0.$$
Here Ω is a bounded domain of $\mathbb{R}^N$, σ(⋅) and φ(⋅) are both continuous nonnegative nonlinear real functions, and q is a static load. The source terms f₁(u) and f₂(θ) and the nonlinear external damping g(u_t) are essentially |u|^ρ u, |θ|θ, and |u_t|^r u_t, respectively.
Introduction
This problem is based on the equation proposed by Woinowsky-Krieger [1] as a model for a vibrating beam with hinged ends. Without thermal effects, Ball [2] studied the initial-boundary value problem for a more general beam equation subject to homogeneous boundary conditions. Ma and Narciso [3] proved the existence of global solutions and of a global attractor for the Kirchhoff-type beam equation without structural damping, subject to certain conditions. In fact, plate equations without thermal effects were studied by several authors; we quote, for instance, [4][5][6][7][8].
In the following we also make some comments about previous work on the long-time dynamics of thermoelastic coupled beam systems with thermal effects.
Giorgi et al. [9] studied a class of one-dimensional thermoelastic coupled beam equations and established the existence and uniqueness of a global weak solution and the existence of a global attractor under Dirichlet boundary conditions. Barbosa and Ma [10] studied the long-time behavior of a class of two-dimensional thermoelastic coupled beam equations. In addition, we refer the reader to [11][12][13][14][15] and the references therein.
Our mathematical problem is the nonlinear n-dimensional thermoelastic coupled beam equations with structural damping, which arise from the model of a nonlinear vibrating beam with the Fourier thermal conduction law, with the initial conditions
u(x, 0) = u₀(x), u_t(x, 0) = u₁(x), θ(x, 0) = θ₀(x) (10)
and the boundary conditions (11). To the best of our knowledge, the existence of a global attractor for thermoelastic coupled beam equations has not been considered in the presence of nonlinear structural damping. Here the unknown function u(x, t) is the elevation of the surface of the beam; u₀(x) and u₁(x) are the given initial value functions; the subscript t denotes the derivative with respect to t; and the assumptions on the nonlinear functions σ(⋅), φ(⋅), f₁(⋅), f₂(⋅), g(⋅) and the external force function q(x) will be specified later.
Under the above assumptions, we prove the existence of global solutions and of a global attractor for the extensible beam system (8)-(11). The paper is organized as follows. In Section 2, we introduce some Sobolev spaces. In Section 3, we discuss the existence and uniqueness of global strong and weak solutions. In Sections 4 and 5, we establish the existence of a global attractor.
Basic Spaces
Our analysis is based on the following Sobolev spaces. For regular solutions we consider the phase space H₁; in the case of weak solutions we consider the phase space H₀, equipped with a suitable norm.
The Existence of Global Solutions
First, using the classical Galerkin method, we establish the existence and uniqueness of a regular solution to problem (8)-(11). We state it as follows.
Theorem 7. Under assumptions (A1)-(A6), for any initial data (u₀, u₁, θ₀) ∈ H₁, problem (8)-(11) has a unique regular solution (u, θ). Proof. Let us consider the variational problem associated with (8)-(11), posed for all test functions in H₀²(Ω). This is handled with the Galerkin approximation method, which is standard. Here we denote the approximate solution by (u_m(t), θ_m(t)). We obtain the theorem by proving the existence of the approximate solutions, deriving estimates for them, passing to the limit, and proving uniqueness. In the following we give the estimates of the approximate solutions and the proof of uniqueness.
Estimate 1. Taking u′_m(t) as the test function in the first approximate equation of (25) and θ_m(t) in the second, adding the results, using the antiderivatives σ̂(z) = ∫₀ᶻ σ(s) ds and F₁(u) = ∫₀ᵘ f₁(s) ds together with the Schwarz inequality, and then integrating from 0 to t < T, we obtain a first a priori bound. With estimates 1-2 and 4-5, we obtain the compactness necessary to pass to the limit in the approximate equations of (25). Then it is a matter of routine to conclude the existence of global solutions on [0, T].
Uniqueness. Let (u, θ) and (v, θ̄) be two solutions of (8)-(11) with the same initial data. Writing w = u - v and ψ = θ - θ̄, taking the difference of the equations (25) satisfied by (u, θ) and (v, θ̄), and adding the results, we obtain an energy identity for (w, ψ). Using the Mean Value Theorem and the Young inequality combined with estimates 1-2 and 4-5, we deduce a Gronwall-type inequality with some constant C. Then from Gronwall's Lemma we see that u = v and θ = θ̄. The proof of Theorem 7 is completed.
Theorem 8. Under the assumptions of Theorem 7, if the initial data (u₀, u₁, θ₀) ∈ H₀, there exists a unique weak solution of problem (8)-(11) which depends continuously on the initial data with respect to the norm of H₀.
Proof. By density arguments, we can obtain the existence of a weak solution in H₀. Let us consider (u₀, u₁, θ₀) ∈ H₀. Since H₁ is dense in H₀, there exists a sequence of data in H₁ converging to (u₀, u₁, θ₀). We observe that for each μ ∈ ℕ there exists a smooth solution (u^μ, θ^μ) of the initial-boundary value problem (8)-(11) satisfying a uniform bound, where C₀ is a positive constant independent of μ ∈ ℕ.
Defining w_{μ,ν} = u^μ - u^ν and z_{μ,ν} = θ^μ - θ^ν for μ, ν ∈ ℕ, following the steps already used in the proof of uniqueness of regular solutions for (8)-(11), and considering the convergence given in (34), we deduce that there exists (u, θ) to which the approximate solutions converge. From this convergence, we can pass to the limit using standard arguments. Theorem 8 is proved.
Here C is a constant depending on the initial data that may vary from one expression to another; throughout this paper, C denotes such a generic constant.
The Existence of Absorbing Set
The main result on the absorbing set reads as follows.
Theorem 13. Assume the hypotheses of Theorem 8; then the corresponding semigroup S(t) of problem (8)-(11) has an absorbing set B in H₀.
Then (70) can be rewritten accordingly. Using Nakao's Lemma 11, we conclude that, as t → ∞, the first term on the right side of (74) goes to zero; thus, in terms of Ẽ(t), we conclude that B is an absorbing set for S(t) in H₀.
The Existence of a Global Attractor
The main result on the global attractor reads as follows.
Proof. We apply Lemmas 11 and 12 to prove asymptotic smoothness. Given initial data (u₀, u₁, θ₀) and (v₀, v₁, θ̄₀) in a bounded invariant set B ⊂ H₀, let (u, θ) and (v, θ̄) be the corresponding weak solutions of problem (8)-(11). Then the differences w = u - v and ψ = θ - θ̄ are weak solutions of a coupled system whose right-hand side, given in (79), we now estimate.
"Mathematics"
] |
mGPT: Few-Shot Learners Go Multilingual
This paper introduces mGPT, a multilingual variant of GPT-3, pretrained on 61 languages from 25 linguistically diverse language families using Wikipedia and the C4 Corpus. We detail the design and pretraining procedure. The models undergo an intrinsic and extrinsic evaluation: language modeling in all languages, downstream evaluation on cross-lingual NLU datasets and benchmarks in 33 languages, and world knowledge probing in 23 languages. The in-context learning abilities are on par with the contemporaneous language models while covering a larger number of languages, including underrepresented and low-resource languages of the Commonwealth of Independent States and the indigenous peoples in Russia. The source code and the language models are publicly available under the MIT license.
Introduction
The advent of the Transformer architecture (Vaswani et al., 2017) has facilitated the development of various language models (LMs; Liu et al., 2020a). Although the well-established "pretrain & finetune" paradigm has led to rapid progress in NLP (Wang et al., 2019), it imposes several limitations. Finetuning relies on an extensive amount of labeled data. Collecting high-quality labeled data for new tasks and languages is expensive and resource-consuming (Wang et al., 2021). LMs can learn spurious correlations from finetuning data (Naik et al., 2018; Niven and Kao, 2019) and demonstrate inconsistent generalization, catastrophic forgetting, or brittleness to finetuning data order (McCoy et al., 2020; Dodge et al., 2020). Last but not least, finetuning requires additional computational resources and, therefore, aggravates the problem of a large carbon footprint (Bender et al., 2021).
The latest approaches address these limitations with zero-shot and few-shot learning, performing a task with LM scoring or conditioning on a few demonstration examples without parameter updates (Brown et al., 2020). Autoregressive LMs adopted via these paradigms have been widely applied in many NLP tasks (Schick and Schütze, 2021; Perez et al., 2021), notably in cross-lingual knowledge transfer (Winata et al., 2021) and low-resource language scenarios (Lin et al., 2022). However, model development for underrepresented typologically distant and low-resource languages (Wu and Dredze, 2020; Lauscher et al., 2020; Hedderich et al., 2021) and the cross-lingual generalization abilities of autoregressive LMs (Erdem et al., 2022) have been left understudied.
This paper presents mGPT, a multilingual version of GPT-3 (Brown et al., 2020) available in 1.3B (mGPT 1.3B) and 13B (mGPT 13B) parameters. We aim (i) to develop a large-scale multilingual autoregressive LM that inherits GPT-3's generalization benefits and (ii) to increase the linguistic diversity of multilingual LMs, making the first attempt to address languages of the Commonwealth of Independent States (CIS) and under-resourced languages of the small peoples in Russia. We pretrain mGPT in 61 languages from 25 language families on Wikipedia and the Colossal Clean Crawled Corpus (C4; Raffel et al., 2020). We analyze mGPT's performance on various intrinsic and extrinsic tasks and compare it with contemporaneous generative LMs. Our findings include competitive performance on multiple tasks and prominent language modeling abilities on the languages of the small peoples in Russia; adding more demonstrations may result in performance degradation for both mGPT and XGLM; and hate speech detection is one of the most challenging tasks, receiving random-guessing performance in the zero-shot and few-shot evaluation setups. External validation by the NLP community since the release shows that mGPT 1.3B can outperform large-scale LMs on SuperGLUE tasks and promote strong solutions for multilingual clause-level morphology tasks. We release the model evaluation code and the mGPT 1.3B and mGPT 13B models. We hope to facilitate research on the applicability of autoregressive LMs in non-English languages and increase the linguistic inclusivity of low-resource languages.
Related Work
Multilingual Transformers. Recent years have featured the development of various monolingual and multilingual LMs initially designed for English. BERT (Devlin et al., 2019) has been replicated in other high-resource languages (Martin et al., 2020; Masala et al., 2020) and language families, e.g., Indian (Kakwani et al., 2020) and Balto-Slavic (Arkhipov et al., 2019). Massively multilingual LMs such as mBERT, XLM-R (Conneau et al., 2020), RemBERT (Chung et al., 2021), mBART (Liu et al., 2020b), and mT5 (Xue et al., 2021) have now pushed state-of-the-art results on various NLP tasks in multiple languages (Kalyan et al., 2021). Such models support more than 100 languages and vary in architecture design and pretraining objectives. By contrast, our work presents one of the first multilingual autoregressive LMs covering more than 61 languages.
GPT-based Language Models. Large-scale generative LMs (e.g., GPT-3; Brown et al., 2020) are triggering a shift from the "pretrain & finetune" paradigm to prompt-based learning (Liu et al., 2023a). The benefit of balancing pretraining costs and performing standardized NLP tasks with a few demonstration examples has stimulated the development of open-source autoregressive LMs for English (e.g., Black et al., 2022; Biderman et al., 2023; Dey et al., 2023), Chinese (Zeng et al., 2021), and Russian (Zmitrovich et al., 2023). A few contemporaneous works extend the research on zero-shot and few-shot learning, evaluating the in-context abilities of GPT-based LMs in multilingual scenarios. Winata et al. (2021) report that English GPTs perform significantly better than random guessing with monolingual and multilingual prompts on typologically close languages, such as French, Spanish, and German. Lin et al. (2022) propose XGLM, a multilingual GPT-style LM in 30 languages, and empirically show that it can outperform its monolingual counterparts of a comparable number of parameters. We use XGLM as the main baseline in our experiments and analyze the results of comparing mGPT 1.3B with other autoregressive LMs published after our release, such as BLOOM (Scao et al., 2023).
Pretraining Data
Language Selection. Table 1 summarizes the list of languages by family. The pretraining corpus consists of a typologically weighted set of languages covered by cross-lingual benchmarks, such as XGLUE (Liang et al., 2020) and XTREME (Hu et al., 2020). The motivation behind the language choices is to narrow the gap between the high-resource and low-resource languages (Ducel et al., 2022). To this end, we include 20 languages from the tail of the C4 language list, the list of underrepresented languages of Russia, and the official and resource-lean CIS languages (Orekhov et al., 2016).
Data Preparation Pipeline. Pretraining extensive LMs requires large volumes of high-quality data. Despite the explosive growth of web corpora, resulting in pretraining data volumes of up to 6T tokens (Xue et al., 2021), the data quality is often unsatisfactory (Kreutzer et al., 2022). General approaches to maximizing quality are based on manually curated heuristics (Yang et al., 2019b), the perplexity of LMs (Wenzek et al., 2020), and data quality classifiers (Brown et al., 2020). Our data preparation pipeline includes data collection, deduplication, and filtration.
Data Collection
The pretraining corpus represents a collection of documents from Wikipedia and C4. The Wikipedia texts are extracted from the dumps (v.20201101) with WikiExtractor (Attardi, 2015). The C4 data is downloaded using the Tensorflow datasets (tensorflow.org/datasets/catalog/c4; Paper, 2021).
Deduplication. The text deduplication includes 64-bit hashing of each text in the pretraining corpus, keeping only texts with a unique hash.
Filtration. We follow Ortiz Suárez et al. (2019) for the C4 data filtration. We also filter the documents based on their text compression rate using zlib. The most strongly and most weakly compressing deduplicated texts are discarded. The compression range for an acceptable text is empirically defined as ×1.2 to ×8: texts with an entropy of less than 1.2 contain code junk and entities, while those above 8 contain repetitive segments. The next step distinguishes between low- and high-quality documents with a binary classifier. The classifier is trained with Vowpal Wabbit on the Wikipedia documents as positive examples and the filtered C4 documents as negative ones. The remainder is cleaned by a set of language-agnostic heuristics. The size of the pretraining corpus is 46B UTF characters (Wikipedia) and 442B UTF characters (C4), resulting in 600GB. Figure 1 shows the total number of tokens for each language, and the total number of documents in the pretraining corpus is presented in Figure 2.
Tokenization
The design of the tokenization method may have a significant impact on learning efficient representations, model memorization, and downstream performance (Mielke et al., 2021; Nogueira et al., 2021; Pfeiffer et al., 2021; Rust et al., 2021). We investigate the effect of the tokenization strategy on the model perplexity. We pretrain five strategy-specific versions of mGPT 163M on a Wikipedia subset of the pretraining corpus. The tokenization strategy is selected based on the perplexity on a held-out Wikipedia sample (approx. 10.7MB), computed as in Equation 1:
$$\mathrm{PPL}(t) = \exp\Big(-\frac{1}{|c|}\sum_{i=1}^{|t|}\log p(x_i \mid x_{<i})\Big), \tag{1}$$
where t is an input text, |t| is the length of the text in tokens, and |c| is the length of the text in characters. The perplexity is normalized over the number of characters, since the tokenizers produce different numbers of tokens for t (Cotterell et al., 2018).
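Given per-token log-probabilities from a model, this character-normalized perplexity can be computed as in the following minimal sketch:

import math

def char_perplexity(token_logprobs, text):
    """Perplexity of `text` normalized by its character count |c|,
    given natural-log probabilities for its |t| tokens."""
    nll = -sum(token_logprobs)           # total negative log-likelihood
    return math.exp(nll / len(text))     # normalize over characters, not tokens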
Tokenization Strategies. We considered five tokenization strategies incorporating specific representations of uppercase characters, numbers, punctuation marks, and whitespaces. Table 2 presents examples of the tokenization strategies.
• DEFAULT: BBPE (Wang et al., 2020);
• CASE: Each uppercase character is replaced with a special token <case> followed by the corresponding lowercase character (see the sketch below);
• ARITHMETIC: The CASE strategy combined with representing numbers and arithmetic operations as individual tokens;
• COMBINED: The ARITHMETIC strategy combined with representing punctuation marks and whitespaces as individual tokens;
• CHAR: Character-level tokenization.
Pretraining Details. The models are pretrained on 16 V100 GPUs for 600k training steps with a set of fixed hyperparameters: vocabulary size of 100k, context window of 2048, learning rate of 2e-4, and batch size of 4.
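For illustration, the CASE transformation listed above can be sketched as follows; the literal "<case>" marker follows the description in the list, and the exact marker string used in pretraining is an assumption.

def case_transform(text, marker="<case>"):
    """Replace each uppercase character with a marker plus the lowercase character."""
    out = []
    for ch in text:
        if ch.isupper():
            out.append(marker)
            out.append(ch.lower())
        else:
            out.append(ch)
    return "".join(out)

# case_transform("I Want") -> "<case>i <case>want"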
Results
The experiment results are presented in Table 3. The DEFAULT model achieves the best results, outperforming the rest of the models by up to 2.5 perplexity points. Based on this experiment, we select the DEFAULT strategy to pretrain the mGPT 1.3B and mGPT 13B models.
Model Architecture
The mGPT architecture is based on GPT-3. We use the architecture description by Brown et al., the GPT-2 code base (Radford et al., 2019) from HuggingFace (Wolf et al., 2020), and Megatron-LM (Shoeybi et al., 2020). Table 4 presents the description of the GPT-2 and GPT-3 architectures of comparable sizes. With all other hyperparameters equal, GPT-3 has fewer layers than GPT-2 (24 vs. 48) but a larger hidden size (d_model: 2048 vs. 1600). GPT-3 also alternates the classic dense and sparse attention layers (Child et al., 2019).
Model Pretraining
The pretraining procedure mostly follows Brown et al. We utilize the DeepSpeed library (Rasley et al., 2020) and Megatron-LM (Shoeybi et al., 2020). We pretrain our LMs with a total batch size of 2048 and a context window of 512 tokens. The total number of training steps is 600k, and the models have seen 400B tokens during pretraining. The pretraining took 14 days on a cluster of 256 V100 GPUs for mGPT 1.3B and 22 days on 512 V100 GPUs for mGPT 13B. We report the computational, energy, and carbon costs in §7.2.
Language Modeling
Method. We estimate the language modeling performance on the held-out sets for each language.
Here, perplexity is computed as described in §3.2, except that it is normalized over the length of the input text t in tokens, |t|. We also run statistical tests to analyze the effect of linguistic, dataset, and model configuration criteria: • Language script: we divide the languages into two groups by their script, Latin and others (e.g., Cyrillic and Arabic), and use the Mann-Whitney U test (Mann and Whitney, 1947) to analyze the perplexity distributions in the two groups.
• Pretraining corpus size: we calculate the Pearson correlation coefficient (Pearson, 1895) to analyze the correlation between the language perplexity and the number of documents in this language in the pretraining corpus.
• Model size: we use the Mann-Whitney U test to analyze the effect of the model size.
Results by Language. Figure 3 presents the perplexity scores for each language on the held-out sets. The mGPT 13B model achieves the best perplexities, within the 2-to-10 score range, for the majority of languages, including Dravidian (Malayalam, Tamil, Telugu), Indo-Aryan (Bengali, Hindi, Marathi), Slavic (Belarusian, Ukrainian, Russian, Bulgarian), Sino-Tibetan (Burmese), Kipchak (Bashkir, Kazakh), and others. Higher perplexities, up to 20, occur for only seven languages from different families. The mGPT 1.3B results have a similar distribution but are consistently higher than those of mGPT 13B.
Results by Language Family. Analyzing the results by language family (see Figure 4), we find that mGPT 13B shows consistently lower perplexities than mGPT 1.3B. Specifically, mGPT 1.3B underperforms mGPT 13B on the Basque, Greek, Kartvelian, and Turkic families. Correlation Analysis. We present the results in Table 5. We observe that the language modeling performance depends on the language script and model size. In particular, the non-Latin languages receive lower scores on average, while mGPT 13B performs better than mGPT 1.3B in this setting. However, the positive correlation between the pretraining corpus size and perplexity in particular languages can be attributed to the low diversity of text domains in the pretraining monolingual corpora for the low-resource languages. Such corpora contain Wikipedia articles on a limited range of general topics; therefore, the model learns the distribution of the corpora without being able to generalize well. In general, the results align with Scao et al. (2023), who report that the considered criteria can affect the knowledge acquired by BLOOM 1B and BLOOM 176B.
Downstream Evaluation
We conduct an extrinsic evaluation of mGPT and the baselines on classification and sequence labeling tasks in zero-shot and few-shot settings. In the zero-shot setting, the model is shown a test example formatted as a prompt in natural language, while in the few-shot setting, the model is provided with k demonstrations from the training data specified via prompts. Prompt examples for each task are presented in Table 6.
Method. mGPT utilizes per-token cross-entropy loss, which reduces to the negative log probability due to one-hot encoding of the tokens. We select the target label associated with the prompt that results in the lowest sum of negative log probabilities over its tokens. The few-shot experiments are run five times with different random seeds, while the zero-shot experiments are run only once, since the model loss is deterministic.
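A minimal sketch of this label-scoring procedure with a HuggingFace causal LM follows; the checkpoint id, the prompt template, and the label verbalizations are placeholders rather than the exact evaluation setup.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ai-forever/mGPT")   # assumed public checkpoint id
model = AutoModelForCausalLM.from_pretrained("ai-forever/mGPT").eval()

def nll(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)         # cross-entropy averaged over shifted tokens
    return out.loss.item() * (ids.shape[1] - 1)  # rescale the mean to a sum of NLLs

def classify(example, verbalized_labels):
    # choose the label whose filled-in prompt yields the lowest total NLL
    return min(verbalized_labels, key=lambda lab: nll(example + " " + lab))

# classify("The movie was great. Sentiment:", ["positive", "negative"])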
Baselines. The XGLM 1.7B and XGLM 7.5B models are used as the baselines in the classification experiments. We reproduce the XGLM evaluation based on the methodology by Lin et al. (2022) and use the model weights and code available in the fairseq library (Ott et al., 2019). We select prompts according to the templates reported by Lin et al. Prompts for non-English languages are automatically translated with Google Translate.
Results. Table 7 presents the classification results averaged across languages. The "✗" tag marks k-shot settings not reported by Lin et al., which we do not run, for reproducibility purposes and fair comparison. The results by Lin et al. are reproduced in the zero-shot setup, and some scores are even slightly higher. However, not all results are reproduced, e.g., on PAWSX and XNLI. We attribute this to potential differences in the translated prompts.
Overall, we observe that mGPT 1.3B is comparable with XGLM 1.7B while having fewer weights and being pretrained in twice as many languages. mGPT 13B performs better than XGLM 7.5B in the zero-shot setting on all tasks except XNLI. At the same time, it lags behind in the few-shot setting, being better than XGLM 7.5B only on the XNLI and PAWSX tasks. Comparing the performance across languages, we find that English receives the highest accuracy for all tasks. The mGPT 1.3B and mGPT 13B models show high accuracy for the Austronesian, Dravidian, Japonic, Germanic, and Romance language families. Only the Afro-Asiatic family gets low accuracy. The mGPT models perform better than their XGLM counterparts for Austronesian, Koreanic, and Romance languages.
Our results on hate speech detection are consistent with Lin et al. The performance is slightly better across the five languages but still close to random guessing (see Table 8). Manual analysis shows that the behavior is sensitive to the input prompts, most notably for Polish. Increasing the number of demonstrations can lead to performance degradation on some classification tasks for both mGPT and XGLM.
Sequence Labeling
Tasks. The sequence labeling tasks include named entity recognition (NER) and part-of-speech tagging (POS) from the XGLUE benchmark (Liang et al., 2020). To address other medium-resource and resource-lean languages, we use the Universal Dependencies treebanks (UD; Nivre et al., 2016) to evaluate POS-tagging in Armenian, Belarusian, Buryat, Kazakh, Tatar, Ukrainian, and Yakut. Method. We use a modified approach to the sequence labeling tasks compared to §4.2.1. Given a sentence of n words, we iteratively predict the label for each word x_i using the preceding words x_<i and their predicted labels l_<i as the context, via a template "x_<i l_<i x_i _", where i is the current token index and "_" is a placeholder; the only exception is the first token x_1, which is used without preceding context. The placeholder is filled with each possible target label l ∈ L at each step. We select the label with the lowest sum of per-token losses in the resulting string. The experiments are run in the zero-shot and 4-shot settings.
Example. Consider an example for the POS-tagging task: "I [PRON] WANT [VERB] IT [PART] . [PUNCT]", which requires 4 procedure steps. First, we combine the placeholder in the string "I _" with each possible POS tag and select the most probable candidate. Next, we repeat the procedure for "I l_1 WANT _", and so on.
Baselines. We use the results reported in Liang et al. as the baselines: M-BERT, XLM-R, and Unicoder (Huang et al., 2019). Note that the baselines are finetuned models.
Table 10: Accuracy scores (%) for XGLUE and Universal Dependencies POS-tagging by language. mGPT models are evaluated in the 4-shot setting. The best score is put in bold, the second best is underlined.
NER Results
Table 9 shows, counterintuitively, that mGPT 1.3B outperforms mGPT 13B on all languages. The 4-shot setting falls behind the finetuned models but significantly outperforms random guessing for both mGPT models. Per-language analysis shows a large gap between English and the other languages (for mGPT 13B, the F1-score on English is more than twice as high as for any other language), while both models perform worst on German. This pattern coincides with the baseline results. In addition, while the F1-score of mGPT 1.3B exceeds the 10 percent threshold for all languages, this is not the case for mGPT 13B.
POS-tagging Results. POS-tagging results for the XGLUE benchmark and the resource-lean languages are presented in Table 10. Similarly to the NER task, mGPT 1.3B outperforms mGPT 13B in practically all languages except Italian. On average, mGPT 1.3B achieves an accuracy score of 0.24, while mGPT 13B only scores 0.21. These results are still far behind finetuned models; however, they are significantly higher than random guessing (we evaluate the sequence labeling tasks using the XGLUE code: github.com/microsoft/XGLUE). Analyzing the results for the low-resource languages, it can be seen that the mGPT 1.3B performance is comparable with its performance on XGLUE, while the mGPT 13B scores are lower.
Knowledge Probing
Method. We probe our models for factual knowledge in 23 languages using the mLAMA dataset (Kassner et al., 2021). The task is to complete a knowledge triplet ⟨subject, relation, object⟩ converted to templates for querying LMs. Consider an example from the original LAMA (Petroni et al., 2019) for English, where ⟨Dante, born-in, X⟩ is converted to the template "Dante was born in [MASK]". We follow Lin et al. in designing the probing task. As each such query contains hundreds of negative candidates on average, we limit the number of candidates to three, i.e., one ground-truth candidate and two candidates randomly sampled from the provided knowledge source. The probing performance is evaluated with precision@1 averaged over all relations per language.
Results. Figure 5 outlines the results for mGPT 1.3B and mGPT 13B. The overall pattern is that the performance is equal to or above 0.6 for Germanic, Romance, Austro-Asiatic, Japonic, and Chinese languages. However, Uralic, Slavic, Koreanic, and Afro-Asiatic languages receive scores lower than 0.5. We also find that scaling the number of model parameters usually boosts the performance for high-resource languages by up to 5 points, while no significant improvements are observed in the other languages. Comparing our results with Lin et al., we conclude that our models achieve lower performance than XGLM 7.5B in almost all languages and perform on par with GPT3-Curie 6.5B.
External Evaluation
General Language Understanding. Scao et al. (2023) compared the performance of BLOOM 176B, mGPT 1.3B, OPT 175B (Zhang et al., 2022), GPT-J 6B (Wang and Komatsuzaki, 2021), and T0 11B (Victor et al., 2022) on a subset of tasks from the SuperGLUE benchmark (Wang et al., 2019) in the zero-shot and one-shot settings. The results of evaluating the models using five prompts are presented in Figure 6. The mGPT 1.3B model has comparable performance despite having fewer weights. In the zero-shot setting, the performance of mGPT 1.3B, BLOOM 176B, OPT 175B, and GPT-J 6B on the considered tasks is above random guessing. We also observe strong performance of mGPT 1.3B on the Winogender Schema Diagnostics (Ax-g). In the one-shot setting, mGPT 1.3B performs on par with GPT-J 6B, and the resulting variability is significantly reduced across all prompts.
Multilingual Clause-level Morphology. The first shared task on Multilingual Clause-level Morphology (Goldman et al., 2022) covers nine languages and includes three sub-tasks: (i) inflection (generating a word form given a lexeme and a set of morphosyntactic features), (ii) reinflection (reinflecting an input sentence according to a given set of morphosyntactic features), and (iii) analysis (detecting a root and its features in an input sentence). Acikgoz et al. (2022) developed a first-place solution based on mGPT 1.3B and the prefix-tuning method, outperforming the other solutions and baselines on the third task.
Generation Evaluation
Method. We compute seven lexical diversity metrics from Gehrmann et al. (2021) using the mGPT outputs (generation hyperparameters: temperature = 1, max length = 100, top_k = 5, top_p = 0.9) on 100 test set samples from the story generation task in five languages: English, French, German, Spanish, and Chinese (Chen et al., 2022). The diversity metrics include the Shannon entropy over unigrams (Entropy 1), the mean segmented type-token ratio over segment lengths of 100 (MSTTR), the ratio of distinct unigrams over the total number of unigrams (Distinct 1), and the count of unigrams that appear once in the collection of generated outputs (Unique 1).
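The unigram-level metrics listed here can be computed roughly as in the following sketch over a list of generated texts; MSTTR is omitted for brevity, and whitespace splitting is an assumption about the tokenization used for scoring.

import math
from collections import Counter

def unigram_diversity(texts):
    tokens = [t for text in texts for t in text.split()]
    counts = Counter(tokens)
    total = sum(counts.values())
    entropy1 = -sum(c / total * math.log2(c / total) for c in counts.values())
    distinct1 = len(counts) / total                      # distinct / total unigrams
    unique1 = sum(1 for c in counts.values() if c == 1)  # unigrams appearing once
    return entropy1, distinct1, unique1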
Results
The results are presented in Table 11.
The diversity metric scores for Chinese are the highest, while the mean generated text length is the shortest. This is likely due to its logographic writing. The results for the Indo-European languages (French, German, and Spanish) are similar, indicating that mGPT 1.3B generates diverse texts in these languages. Surprisingly, the metrics are lower for English, with the average text length being longer.
Our current natural language generation evaluation approach lacks downstream tasks, which we leave for future work.
Discussion
Our key takeaways on pretraining and evaluating large-scale multilingual autoregressive LMs are summarized below.
Model Scaling
Empirical Results
The language modeling results for mGPT 1.3B and mGPT 13B suggest that model scaling improves generation abilities for all given languages (see §4.1). However, it does not improve performance on the downstream and probing tasks (see §4.2; §4.3). Overall, the language modeling performance depends on the model size and the pretraining corpus size in a language, and smaller models may better encode linguistic information than larger ones. These findings align with Scao et al. (2023).
Takeaways. Our work was conducted a year before the Chinchilla scaling laws were introduced (Hoffmann et al., 2022). According to the advanced methods of scaling LMs, our pretraining corpus could be sufficiently extended to improve the generalization abilities of the mGPT 13B model. At the same time, the pretraining corpus design can promote model underfitting and overfitting on particular languages. We believe this can be accounted for by aggregating the language-specific cross-entropy loss and producing language weights, similar to Xie et al. (2023).
Lack of Data
Empirical Results. Another challenging factor is the lack of high-quality data for the low-resource languages. Although mGPT shows promising results on the language modeling and sequence labeling tasks for the underrepresented languages (see §4.1, §4.2), the low amount of evaluation resources limits the scope for analyzing the model's generalization abilities. The correlation between the model performance and the amount of pretraining data in a language (see §4.1, and, e.g., Lauscher et al., 2020; Ahuja et al., 2022) further highlights the need for creating text corpora in such languages.
Takeaways. The question of addressing the discrepancy in data distribution across the world's languages remains unresolved. Our data collection and filtration approach is identical for all considered languages. Extending the language-agnostic heuristics is restrained by the lack of linguistic expertise. However, we assume that experimenting with the training data for the text quality classifiers can improve the resulting quality of the corpora for the low-resource languages (e.g., training the classifiers on different mixtures of data in the medium- and high-resource languages).
As follow-up work, we release 23 versions of the mGPT 1.3B model continuously pretrained with the language modeling objective on monolingual corpora for medium-resource and low-resource languages collected through collaboration with the NLP community. Table 12 summarizes the models by language and their language modeling performance on the held-out monolingual test sets. Examples of the corpora include the Eastern Armenian National Corpus (Khurshudyan et al., 2022), OpenSubtitles (Lison and Tiedemann, 2016), and TED talks. Continued pretraining on additional data improves the language modeling performance.
Language Selection
Empirical Results. The results of mGPT 1.3B on most of the classification tasks are on par with or better than those of XGLM 1.7B, given that mGPT covers twice as many languages (see §4.2). However, mGPT underperforms the baselines on several multi-class classification and probing tasks.
Takeaways. We find that balancing the pretraining corpus by language family helps improve the language modeling abilities for underrepresented languages, due to their typological similarity with the medium- and high-resource languages (see §4.1). However, increasing language diversity can lead to performance degradation because of the curse of multilinguality and limited model capacity (Conneau et al., 2020).
Tokenization
Empirical Results
We conduct an ablation study to analyze the impact of the tokenization strategy on language modeling performance. We find that the considered strategies do not improve the model's perplexity. However, the main drawback of the perplexity-based evaluation is that it only partially assesses the model's generalization abilities.
Takeaways. The optimal tokenization method and vocabulary size remain an open question, particularly in the multilingual setup (Mielke et al., 2021). There are no established methods for defining the vocabulary size based on the amount of textual data in different languages. Our experiments are limited to a fixed vocabulary size, and we leave further investigation of the tokenization strategies and their configurations for future work.
Empirical Results
• Increasing the number of demonstrations does not always lead to improvements and can decrease the performance on some downstream tasks (see §4.2.1; §4.2.2). This observation aligns with Lin et al. (2022) and Brown et al. (2020).
• The zero-shot and few-shot performance may not exceed random guessing on particular tasks, which points to a failure of the model to follow the guidance in the demonstration examples (see §4.2.1; §4.2.2).
• The prompting approach is unstable and hardly universal across languages, as indicated by the model sensitivity to the prompts.
• The mGPT models can assign higher probabilities to the most frequent tag in the input for the sequence labeling tasks (see §4.2.2).
Takeaways
• The stability of the models with respect to the prompts may be improved using prompt-tuning (Liu et al., 2023b) and contextual calibration (Zhao et al., 2021), as shown in §4.4.
• The generalization capabilities of autoregressive LMs in sequence labeling tasks are an underexplored area. While our LMs achieve results higher than random guessing, the low performance can be attributed to the probability distribution shifts between the pretraining corpora and the prompts. We leave the investigation of alternative prompt designs (Liu et al., 2023a) and structured prediction methods (Liu et al., 2022) for future work.
Conclusion
We introduce the mGPT 1.3B and mGPT 13B models, which cover 61 languages from 25 linguistically diverse language families. Our model is one of the first autoregressive LMs for economically endangered and underrepresented CIS and low-resource languages. The architecture design choices are based on the preliminary tokenization experiments and their perplexity-based evaluation. The model evaluation experiments include language modeling, standardized cross-lingual NLU datasets and benchmarks, world knowledge probing, and social bias tasks. We evaluate the in-context learning abilities in zero-shot and few-shot settings with negative log-likelihood probability. We present a detailed analysis of the model's performance, limitations, and ethical considerations. Despite the room for further quality improvements and for addressing the highlighted limitations, the model shows significant potential and can become the basis for developing generative pipelines for languages other than English, especially low-resource ones. This initiative has been developed for 23 diverse languages through collaboration with the NLP community. We hope to benefit cross-lingual knowledge transfer, annotation projection, and other potential applications for economically challenged and underrepresented languages, and to diversify the research field by shifting away from the Anglo-centric paradigm.
7 Ethical Statement and Social Impacts
To the best of our knowledge, we present one of the first attempts to address this problem for 20 languages of the Commonwealth of Independent States and the small peoples in Russia.
Energy Efficiency and Usage
Pretraining large-scale LMs requires many computational resources, which is energy-intensive and expensive. To address this issue, we used the sparse attention approach suggested by Brown et al. (2020) and reduced the computational resources required to achieve the desired performance. The CO2 emission of pretraining the mGPT models is computed as Equation 2 (Strubell et al., 2019):
$$\mathrm{CO_2} = \mathrm{PUE} \times \mathrm{kWh} \times I_{\mathrm{CO_2}}. \tag{2}$$
The power usage effectiveness (PUE) of our data centers is not more than 1.3, the spent energy is 30.6k kWh (mGPT 1.3B) and 91.3k kWh (mGPT 13B), and the CO2 energy intensity (I_CO2) in the region is 400 grams per kWh. The resulting CO2 emission is 15.9k kg (mGPT 1.3B) and 47.5k kg (mGPT 13B). The emission is comparable with a single medium-range flight of a modern aircraft, which usually releases about 12k kg of CO2 per 1k km. Despite the costs, mGPT can be efficiently adapted to user needs via few-shot learning, bringing down potential budget costs across applications in multiple languages, such as generating content, augmenting labeled data, or summarizing news.
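The reported figures are consistent with this formula, as the quick arithmetic check below illustrates:

def co2_kg(pue, energy_kwh, intensity_kg_per_kwh):
    return pue * energy_kwh * intensity_kg_per_kwh

print(co2_kg(1.3, 30_600, 0.4))  # ~15,912 kg for mGPT 1.3B
print(co2_kg(1.3, 91_300, 0.4))  # ~47,476 kg for mGPT 13B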
The multilingual pretraining saves on data annotation and energy consumption, alleviating the carbon footprint.Model compression techniques, e.g., pruning and distillation, can reduce inference costs.
Social Risks of Harm
Stereotypes and unjust discrimination present in pretraining corpora lead to representation biases in LMs. LMs can reflect historical prejudices against disadvantaged social groups and reproduce harmful stereotypes about gender, race, religion, or sexual orientation (Weidinger et al., 2022). We have analyzed mGPT's limitations regarding social risks of harm involving hate speech on the hate speech detection task. Our results are similar to Lin et al. (2022) in that the performance is close to random guessing. This may indicate a significant bias in the pretraining corpus, a mutual influence of languages during training, or methodological problems in the test set. We do not claim that our evaluation setup is exhaustive, and we assume that other biases can be revealed through direct model application or an extended evaluation.
Potential Misuse
The misuse potential of LMs increases with their ability to generate high-quality texts. Malicious users can perform socially harmful activities that involve generating texts, e.g., spreading propaganda and other targeted manipulation (Jawahar et al., 2020). We recognize that our models can be misused in all supported languages. However, adversarial defense and artificial text detection models can mitigate the ethical and social risks of harm. Our primary purpose is to propose multilingual GPT-style LMs for research and development needs, and we hope to work on the misuse problem with other developers and experts in mitigation research in the future.
Figure 1: Number of tokens for each language in the pretraining corpus on a logarithmic scale.
Figure 2: Number of documents for each language in the pretraining corpus on a logarithmic scale.
Figure 4: Family-wise perplexity results. The scores are averaged over the number of languages within each family.
Figure 5: Knowledge probing results for 23 languages. The performance of a random baseline is 0.33.
Figure 6 :
Figure 6: The SuperGLUE evaluation results in the zero-shot and one-shot settings (Scao et al., 2023).
7.1 Low-resource Languages

NLP for resource-lean scenarios is one of the leading research directions nowadays. The topic's relevance has led to proactive research on low-resource languages. Our work falls under this scope, introducing the first autoregressive LM for 61 languages.
Table 4: Comparison of GPT-2 and GPT-3. The mGPT architecture replicates the parameters of GPT-3 1.3B and GPT-3 13B, and uses sparse attention in alternating dense and sparse layers.
Table 6: Prompt examples for each downstream task. The examples are in English for illustration purposes.
Table 8: Accuracy scores (%) on hate speech detection by language. The best score is put in bold, the second best is underlined.
Table 9: F1-scores for NER by language. The mGPT models are evaluated in the 4-shot setting. The best score is put in bold, the second best is underlined.
Table 11: The results for lexical diversity of generated texts on the GEM story generation task.
Table 12: A list of the mGPT 1.3B models continuously pretrained on monolingual corpora for 23 languages.
"Computer Science",
"Linguistics"
] |
Multi-Ray Modeling of Ultrasonic Sensors and Application for Micro-UAV Localization in Indoor Environments
Due to its payload, size and computational limits, localizing a micro air vehicle (MAV) using only its onboard sensors in an indoor environment is a challenging problem in practice. This paper introduces an indoor localization approach that relies on only the inertial measurement unit (IMU) and four ultrasonic sensors. Specifically, a novel multi-ray ultrasonic sensor model is proposed to provide a rapid and accurate approximation of the complex beam pattern of the ultrasonic sensors. A fast algorithm for calculating the Jacobian matrix of the measurement function is presented, and then an extended Kalman filter (EKF) is used to fuse the information from the ultrasonic sensors and the IMU. A test based on a MaxSonar MB1222 sensor demonstrates the accuracy of the model, and a simulation and experiment based on the ThalesII MAV platform are conducted. The results indicate good localization performance and robustness against measurement noises.
Introduction
Micro air vehicles (MAVs) are a type of drone approximately the size of a person's hand. This property makes them easy to pack and allows them to be flown indoors. One of the fundamental problems of autonomous indoor flight is localization. This problem is made more severe by the strict restrictions on the size and weight of MAVs. Thus, how to utilize low-cost and lightweight onboard sensors to locate MAVs in complex and ever-changing indoor environments is a pressing and challenging problem.
Many indoor localization technologies have been developed to achieve indoor localization, such as localization based on ranging sensors [1][2][3], Bluetooth [4], inertial measurement units (IMUs), cameras, ultra wide band (UWB) [5], wireless local area network (WLAN) [6], ZigBee [7] and radio frequency sensors [8]. In this paper, the above approaches can be divided into two types according to whether the main localization sensors are placed on the unmanned aerial vehicle (UAV): onboard-sensor-based approaches and offboard-sensor-based approaches. The offboard-sensor-based approaches, such as Cricket developed by MIT, require some equipment, such as the beacons or motion capture cameras, to be prearranged in the UAV's flight environment; thus, such approaches have good positioning accuracy in known environments.
The onboard-sensor-based approaches, which do not require the assistance of external devices, can be applied to unknown environments. In [9], the data from the IMUs and lidar are used as inputs to the odometer, and the position of the UAV and the map are given simultaneously. In [10], a landmark-based method is introduced. In this method, some simply shaped objects, such as walls, corners and edges, are chosen as landmarks. Additionally, 16 ultrasonic sensors are mounted around
The Micro-UAV Platform
The Thales II indoor MAV platform, shown in Figure 1, is the second generation of the Thales series created by our group [19]. The MAV has the advantages of small size and light weight, and it can fly for about 4 min on a 400 mAh battery. The weight of the Thales II platform is approximately 75 g, which consists of the airframe (15 g), the battery (12 g), 4 motors and propellers (24 g) and 4 MB1222 sonar range finders (24 g), and its diagonal length is 135 mm (motor to motor). The system architecture of the Thales II MAV platform is shown in Figure 2; the lower part of the architecture shows the main hardware components, which are a modified version of the open source hardware Pixhawk [20]. The powerful ARM STM32F427 is used to perform the calculations, and the ESP8285 WiFi module is used to communicate with the mobile controller. Four 820 hollow-cup motors are used to drive the 55 mm propellers. The angular velocity and acceleration are measured by an MPU6000 IMU sensor, and the heading angle is provided by an LSM303 magnetic sensor; both sensors have a sampling period of 8 ms. Considering the size and load limitations, some widely used precise distance measurement approaches, such as the laser range finder and the depth camera, cannot be applied on this MAV platform. In the Thales II platform, four MB1222 I2CXL-MaxSonar-EZ2 range finders are installed on the bottom of the MAV. They are installed perpendicular to each other, as shown in Figure 3. Thus, the ranges in four directions can be provided in a single measurement.
The features of the MB1222 I2CXL-MaxSonar-EZ2 range finder include centimeter resolution, an excellent compromise between sensitivity and side object rejection, short to long distance detection, range information from 20 cm to 765 cm, up to a 40 Hz read rate, and an I2C interface [21]. Thus, this sensor is one of the best choices for the localization task. The other features of the MAV platform are shown in Table 1.
The operating system running on the flight control board is the open source software PX4. It is easy to develop customized tasks, and all the data during the flight period are easy to store. The main functions of the proposed localization algorithm are shown as the upper part in Figure 2.
Modeling of the Ultrasonic Sensors
Ultrasonic sensors measure distance based on time of flight and return a range. However, this range is not the straight-line distance to an obstacle; rather, it is the distance to the point that has the strongest reflection. This point could be anywhere along the perimeter of the sensor's beam pattern [17,22], which makes the modeling of ultrasonic sensors a complex issue, particularly for online computing. Figure 4 shows the detection area of the MaxSonar MB1222 sonar sensor; it is obtained by placing and measuring a plastic plate at predefined grid points in front of the ultrasonic sensor.
As shown in Figure 4, the 2D beam pattern of the MB1222 sensor can be approximated as an irregular polygon. To reduce the computational load of the polygon model, a multi-ray model is proposed, in which the beam pattern is approximated by a ray group that starts from the origin, as shown in Figure 5. The ultrasonic 2D multi-ray model S can then be formulated as a ray group

S = { s_0 s_j : j = 1, ..., n },   (1)

where s_0 represents the sonar sensor's position and s_j is the end point of the j-th ray. Thus, for a known obstacle O, the model output l is obtained through a two-step calculation. First, the set of all intersections of O and S is calculated as

R = S ∩ O,   (2)

and then l is given as

l = min_{r ∈ R} ||r − s_0||,   (3)

with l = l_max if R is empty. Equation (3) follows the principle that the ultrasonic sensor reports the nearest of all detections, and a predefined value l_max is returned if there is no intersection between S and O.
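As a minimal sketch of Equations (1)-(3), the following Python fragment intersects each ray with each wall segment and returns the nearest hit; the helper names and the map representation (a list of wall endpoint pairs) are illustrative assumptions, not the flight code:

```python
import numpy as np

L_MAX = 7.65  # assumed maximum sonar range in metres (MB1222 datasheet value)

def cross2(u, v):
    """z-component of the 2D cross product."""
    return u[0] * v[1] - u[1] * v[0]

def seg_intersect(p, q, a, b):
    """Intersection point of segments p-q and a-b, or None if they miss."""
    r, s = q - p, b - a
    denom = cross2(r, s)
    if abs(denom) < 1e-12:
        return None  # parallel or degenerate
    t = cross2(a - p, s) / denom
    u = cross2(a - p, r) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return p + t * r
    return None

def multi_ray_range(s0, ray_ends, walls):
    """Model output l of Eq. (3): nearest intersection distance, else L_MAX."""
    best = L_MAX
    for sj in ray_ends:                # one segment s0-sj per ray (Eq. 1)
        for a, b in walls:             # candidate intersections R (Eq. 2)
            hit = seg_intersect(s0, sj, a, b)
            if hit is not None:
                best = min(best, float(np.linalg.norm(hit - s0)))
    return best
```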
Based on the beam pattern of the MaxSonar MB1222 sonar sensor, the multi-ray model was constructed as shown in Figure 6. Nine rays were used to approximate the detection zone of the MB1222. Note that the far ends of the rays were selected slightly beyond the edge to obtain better coverage of the detection zone. To test the fitness between the multi-ray model and the actual sensor measurement, a comparative test was performed between the proposed model and the MB1222 sensor, as shown in Figure 7. The sensor was placed on the edge of a semicircle with radius r, pointing to the center of the circle, and the angle ψ was then increased in five-degree steps. The actual measurement l_t is shown in Table 2. The corresponding output of the multi-ray model, l_m, is presented in Table 3. The modeling error, l_e, is presented in Table 4.
As shown in Table 2: (1) The measurement had a constant offset of approximately 3 cm to 4 cm, even at ψ = 0, i.e., when the sensor is perpendicular to the wall. (2) The maximum detection angles varied with the distance to the wall. The farther the sensor was from the wall, the narrower the detection angle. The half-side detection angle was close to 0 when the distance exceeded 5.9 m, and it reached approximately 35 degrees when r was less than 1 m. For comparison, the 3 cm offset was subtracted from the output of the model, and the model error was defined as l_e = l_t − l_m − 3 cm, as shown in Tables 3 and 4. In most cases, the model error was less than 1 cm, and the maximum model error was 2 cm. Considering that the minimum resolution of the sensor was 1 cm, the proposed model fits the actual sensor well for indoor localization.

Table 2: Actual measurements l_t (cm) for each distance r (rows, cm) and angle ψ (columns, degrees):

r\ψ    0    5    10   15   20   25   30   35   40
30     27   27   27   26   26   25   24   24   -
60     57   57   56   55   54   53   50   49   -
90     86   86   85   84   82   80   78   78   -
120    116  115  114  113  110  109  106  106  -
150    146  145  144  142  140  139  136  -    -
250    247  245  244  245  243  -    -    -    -
350    346  345  344  -    -    -    -    -    -
450    447  446  444  -    -    -    -    -    -

The mark "-" means that the sensor returned its maximum result, i.e., the reflection intensity did not reach the threshold of the sensor.
Note that obvious angular constraint characteristics were observed in the measurements of the ultrasonic sensors; however, we did not introduce the angular constraint into the proposed model, out of consideration for the calculation load: the constraint involves calculating the angles between all line segments of S and O, which may lead to a significant increase in the calculation load. An alternative approach, the jump filter, was used to solve this problem instead, and will be presented in Section 5.
Modeling of the MAV System
To describe the motion of the MAV, the map coordinate system O_m-x_m y_m z_m and the body coordinate system O_b-x_b y_b z_b were introduced. The map coordinate system O_m-x_m y_m z_m was fixed to the earth, and its origin is located at the starting corner m_1 of the map M. The body coordinate system O_b-x_b y_b z_b was fixed to the MAV (in a strapdown configuration), as shown in Figure 8.
Let [φ, θ, ψ] be the roll, pitch and yaw angles, respectively, and let R(φ, θ, ψ) be the corresponding rotation matrix from the body frame to the map frame. Then, the accelerations in the body frame can be transferred to the map frame by

a^m = R(φ, θ, ψ) a^b + G,

where G = [0, 0, g] is the gravity vector in the map frame. Therefore, the discrete-time state-space model of the MAV is given by

v(k+1) = v(k) + t_imu a^m(k),   p(k+1) = p(k) + t_imu v(k),

where t_imu represents the sampling period of the IMU, and v(k) = [v_x(k), v_y(k)] and p(k) = [p_x(k), p_y(k)] are the velocity vector and position vector in the map frame at step k, respectively. The output of the MAV system is the measurement of the multiple sonar sensors, which is defined as

l(k) = h(p(k), ψ(k), S, M),

where l = [l_1, l_2, l_3, l_4] is the measurement vector of the sonar sensors, and h(·) is a nonlinear function of p(k), ψ(k), the sonar model S and the map of the working area M. To obtain the measurements of the sonar sensors, one needs to represent the sonar model S in the map coordinate system. Since S is a set of line segments, this transformation can be achieved by representing the endpoints of the line segments as

s_0^m = p + d_0 [cos(ψ + ψ_{s_0}), sin(ψ + ψ_{s_0})]^T,   s_j^m = s_0^m + d_j [cos(ψ + ψ_{s_0} + ψ_{s_j}), sin(ψ + ψ_{s_0} + ψ_{s_j})]^T,   (11)

where p and ψ denote the UAV's position and heading angle in the map coordinate system, respectively, ψ_{s_0} is the heading angle of the sonar in the body coordinate system, and d_0 is the distance between the origins of the body frame and of the sonar frame. Additionally, d_j and ψ_{s_j} are the length and the angle of the j-th ray in the sonar coordinate system, respectively. Then, the ultrasonic sensor's measurement l is given by Equations (3) and (11).
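A compact sketch of the prediction step implied by this state-space model follows; the Z-Y-X Euler convention and the gravity-compensation sign are assumptions for illustration and would need to match the actual IMU conventions:

```python
import numpy as np

T_IMU = 0.008                      # IMU sampling period t_imu (8 ms)
G = np.array([0.0, 0.0, 9.81])     # gravity vector G = [0, 0, g] (sign assumed)

def R_body_to_map(roll, pitch, yaw):
    """Rotation matrix from body to map frame (Z-Y-X convention assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def predict(p, v, a_body, euler):
    """One forward-Euler step of the discrete-time state-space model."""
    a_map = R_body_to_map(*euler) @ a_body + G  # sign per the equation above
    p_new = p + T_IMU * v                       # p(k+1) = p(k) + t_imu v(k)
    v_new = v + T_IMU * a_map[:2]               # planar state [v_x, v_y]
    return p_new, v_new
```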
In particular, among all the intersections, the one that minimizes Equation (3) is defined as the "active intersection" r_a, and the terms "active ray" s_a and "active wall" M_a are introduced to denote the corresponding ray and the corresponding wall containing the active intersection.

As shown in Equation (3), the measurement function of the system is a nonlinear and discontinuous function; thus, using the EKF rather than the traditional Kalman filter is a feasible way to estimate the location of the MAV. The key issue is to solve the Jacobian matrix of Equation (3).
The gradient matrix of the function h with respect to x at step k is given by

H(k) = ∂h/∂x |_{x = x̂(k|k−1)}.

Based on the multi-ray model, the Jacobian matrix can be calculated by geometric methods. At time k, suppose that the relationship between the sonar model and the map is as shown in Figure 9, and assume that the active ray S_a and the active wall M_a remain unchanged. The entries of the Jacobian matrix can then be obtained in closed form in terms of ψ^a_{S_i} and ψ^a_{M_i}, the yaw angles of the "active ray" and the "active wall" of the i-th ultrasonic sensor in the map frame. In addition, ∂l_i/∂p_x and ∂l_i/∂p_y are set to zero if there is no obstacle in the detection range of the i-th ultrasonic sensor. Then, the MAV's position can be obtained through a standard EKF procedure:

K(k) = P(k|k−1) H(k)^T [H(k) P(k|k−1) H(k)^T + R]^{−1},
x̂(k|k) = x̂(k|k−1) + K(k) [l(k) − l̂(k)],
P(k|k) = [I − K(k) H(k)] P(k|k−1).   (16)

Note that Equation (3) is a piecewise continuous function, and its output may jump under some conditions, such as when S_a changes, M_a changes, or S_a and M_a change simultaneously. In addition, as mentioned in Section 3, if the angle between S_a and M_a exceeds the detection angle constraint, this may also lead to a significant deviation between l(k) and l̂(k). Similar results can also occur when the sensor occasionally malfunctions. Considering that the above cases will lead to a significant change in the term l(k) − l̂(k), a jump filter is introduced to solve this problem:

|l_i(k) − l̂_i(k)| > ε  ⟹  discard l_i(k),   (17)

where ε is a predesigned threshold. Therefore, if the measurement l_i(k) is significantly different from its prediction l̂_i(k), i.e., |l_i(k) − l̂_i(k)| > ε, the corresponding measurement is filtered out of the estimation. The flow chart of the indoor localization algorithm is shown in Figure 10.
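To make the correction step concrete, here is a minimal sketch of the EKF measurement update with the jump filter of Equation (17); the noise covariance and the threshold value are illustrative placeholders:

```python
import numpy as np

EPS = 0.3  # jump-filter threshold epsilon (metres), as in the simulation

def ekf_update(x_pred, P_pred, l_meas, l_hat, H, R_cov):
    """One EKF measurement update, discarding jumpy sonar channels."""
    innov = l_meas - l_hat
    keep = np.abs(innov) <= EPS          # jump filter (Eq. 17)
    if not np.any(keep):
        return x_pred, P_pred            # no usable measurement this step
    Hk, rk = H[keep], innov[keep]
    Rk = R_cov[np.ix_(keep, keep)]
    S = Hk @ P_pred @ Hk.T + Rk          # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S) # Kalman gain
    x = x_pred + K @ rk
    P = (np.eye(len(x_pred)) - K @ Hk) @ P_pred
    return x, P
```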
Experiment
We thoroughly evaluate the proposed positioning algorithm using both a simulation and an actual implementation.
Simulation Result
The localization algorithm developed in this paper was first tested through a simulation. To perform the simulation, a polygonal a priori map is given as shown in Figure 11, and the sampled data of the accelerometer and the magnetic heading sensor are formed as

a^b(k) = ā^b(k) + N(0, V_a),   ψ(k) = ψ̄(k) + N(0, V_ψ),

where ā^b and ψ̄ are the true acceleration and the true heading angle of the MAV, and N(0, V_a) and N(0, V_ψ) are the corresponding Gaussian noises with variances V_a and V_ψ.
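A short sketch of this noise injection (with placeholder variances standing in for the Table 5 values; the sonar noise of the next paragraph is added analogously):

```python
import numpy as np

rng = np.random.default_rng(0)
V_A, V_PSI, V_L = 0.01, 0.001, 0.0004   # illustrative variances, not Table 5

def noisy_imu(a_true, psi_true):
    """Simulated accelerometer and heading samples."""
    a = a_true + rng.normal(0.0, np.sqrt(V_A), size=a_true.shape)
    psi = psi_true + rng.normal(0.0, np.sqrt(V_PSI))
    return a, psi

def noisy_sonar(l_true):
    """Simulated sonar ranges with additive Gaussian noise."""
    return l_true + rng.normal(0.0, np.sqrt(V_L), size=l_true.shape)
```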
For a MAV in this map, since the position, the heading angle, the map and the ultrasonic model are known, the theoretical ultrasonic measurement l̄ is known. We also add Gaussian noise with variance V_l to it:

l(k) = l̄(k) + N(0, V_l).

The other parameters used in the simulation are presented in Table 5. The simulation results are shown in Figures 11-15. The actual trajectory of the MAV is shown by the solid line in Figure 11. The MAV first flew straight to the northeast and then straight north, and finally executed a turning maneuver. The true values of the IMU shown in Figure 12 illustrate that the MAV experienced many acceleration and deceleration events during the flight, and its heading angle also changed significantly with time. The localization results based on the integral of the IMU sensors and on the proposed EKF approach are shown in Figure 11. The IMU position error increases over time due to the drift of the accelerometer, and its localization accuracy is poor. In contrast, the estimated locations of the EKF approach are very close to the actual trajectory. A quantitative error comparison is presented in Figure 13. The localization error of the proposed method is less than 0.25 m, while the IMU localization error accumulates over time and finally approaches 2.8 m.
The measurements and multi-ray model estimations of the four sonar sensors are presented in Figure 14. The ultrasonic measurements underwent multiple jumps over time; meanwhile, the jumps of the model estimations were not synchronized with the measurements due to localization errors, and some differences even reached four meters, such as l_3 at 4.64 s. The activation of the jump filter is shown in Figure 15. In this case, errors of more than 0.3 m are filtered out; the threshold is selected based on the maximum possible cumulative error of the IMU during one sampling period of the sonar sensor. As shown in Figure 13, the difference between the estimations and measurements does not significantly affect the localization because of the correction of the jump filter. The statistical analysis of the localization error of the EKF approach is shown in Figures 16 and 17. Figure 16 shows the distribution of the Euclidean norm of the EKF localization errors. The mean EKF localization error was 0.062 m, and its variance was 0.003 m^2. The red line denotes a smoothing-function fit of the error. The main components of the data are concentrated between 0 and 0.1 m, which is very close to a Rayleigh distribution. A small amount of data was distributed between 0.1 and 0.22 m, which is due to the cumulative error caused by the asynchrony between the measurements and estimations. Figure 17 shows the distribution of the localization error vector; most of the data were less than the mean error, while a few data points were close to 0.25 m.
Experimental Results
The proposed algorithm was implemented as an application of the PX4 autopilot software. It acquired data from the IMU sensors every 8 ms and from the four sonar sensors every 160 ms, and it reported the position of the MAV to the other applications. The Thales II MAV platform was running the upgraded PX4 autopilot software.
In Figure 18, the red Gaussian curves describe the distributions of the acceleration values along the x_b and y_b axes of the Thales II MAV. The mean bias errors on the x_b and y_b axes were 0.053 m/s^2 and 0.27 m/s^2, respectively, and the variances were 0.17 (m/s^2)^2 and 0.21 (m/s^2)^2, respectively. This shows that the IMU sensors were not very accurate and may lead to significant cumulative errors over time.
An L-shaped experimental site was constructed using foam boards, as shown in Figure 19. Because we did not have access to a more accurate localization system, we used a preset path to validate the proposed approach. The test process is to first set a preset trajectory, then move the MAV as closely as possible along the preset trajectory, and finally compare the positioning result with the preset trajectory. Note that the second step is achieved by manual operation; thus, it may lead to deviations between the MAV's actual position and the preset trajectory. As shown in Figure 20, the dotted line denotes the preset path, which starts from the point (0.5, 0.55) and passes through two turns to reach the end point (2.25, 4.75). A ±10 cm error band is also shown by two dashed lines, each parallel to the preset path and 10 cm away from it. As shown in the figure, most of the localization outputs were within the error band, which indicates that the localization error does not exceed 20 cm. Considering the accuracy of human execution, the proposed approach solves the indoor localization problem well. Figure 21 presents the measurements of the four MB1222 ultrasonic sensors. Note that the measurement data are stored once the localization application starts to run; thus, the recording time does not start from 0. As shown in Figure 21, the measurement may contain several jumps in value when the ultrasonic reflected beam changes from one wall to another. For example, the measurement of sonar no. 4, which points to the right side, jumped from 0.57 m to 7.65 m at approximately 36 s; this indicates that the MAV had just passed the first corner. In practice, the items in a room may change, which may adversely affect the localization algorithm. To test the adaptability of the algorithm to this situation, an unmodeled obstacle was placed in the test site. The obstacle was a box approximately 0.7 m long and 0.5 m wide. The test results are shown in Figures 22 and 23. The proposed algorithm worked well with the unmodeled obstacle: the localization results were not significantly affected and stayed within the error band.
Conclusions
In this paper, a novel beaconless indoor localization approach that relies on onboard ultrasonic sensors and IMU sensors is presented.
A multi-ray model for ultrasonic sensors is proposed. It approximates a beam pattern accurately while maintaining low computational complexity, which makes it suitable for a lightweight MAV. A multi-ray modeling process was then provided based on the beam pattern of the MaxSonar MB1222 ultrasonic sensor. The comparative test validates that the proposed model fits the actual sensor well for indoor localization.
Based on the multi-ray model, an EKF-based indoor localization method has been presented. The measurements of the sonar sensors and the IMU sensors are fused to achieve higher-precision positioning. The jump filter is introduced to suppress abnormal and significant differences between the estimations and measurements.
Simulations are presented to validate the proposed methods, and the results show a localization accuracy of approximately 20 cm. Afterwards, the proposed approach was applied to the Thales II MAV, which is a small and lightweight platform. The results illustrate that its computational complexity is low enough to run on the STM32 platform, and the positioning accuracy is also better than 20 cm. An experimental test with an unmodeled obstacle shows the good robustness of the proposed method: the localization results were not significantly affected and stayed within the error band.
Future work is to improve the algorithm for more complex indoor environments, such as offices with much electric and electronic equipment, which may cause large interference to the measurements of the magnetic compass.
Author Contributions: F.X. conducted the ultrasonic sensor modeling. Y.L. and S.X. contributed the localization method. Z.J. contributed the simulation and application. Y.L. and Z.J. wrote the paper. S.X. and F.X. revised the paper.
"Engineering",
"Computer Science"
] |
Gross-Neveu-Wilson model and correlated symmetry-protected topological phases
We show that a Wilson-type discretization of the Gross-Neveu model, a fermionic N-flavor quantum field theory displaying asymptotic freedom and chiral symmetry breaking, can serve as a playground to explore correlated symmetry-protected phases of matter using techniques borrowed from high-energy physics. A large-N study, both in the Hamiltonian and Euclidean formalisms, yields a phase diagram with trivial, topological, and symmetry-broken phases separated by critical lines that meet at a tricritical point. We benchmark these predictions using tools from condensed matter and quantum information science, which show that the large-N method captures the essence of the phase diagram even at N = 1. Moreover, we describe a cold-atom scheme for the quantum simulation of this lattice model, which would allow one to explore the single-flavor phase diagram.
I. INTRODUCTION
The understanding and classification of all possible phases of matter is one of the most important challenges of contemporary condensed-matter physics [1] and high-energy physics [2], and it also has important implications in quantum information science [3]. Such a complex quest can benefit enormously from the complementary perspectives and tools developed by these different communities, calling for a cross-disciplinary dialogue that can lead to a very interesting collaborative approach. The theory of spontaneous symmetry breaking [4] and critical phenomena [5] are representative examples, where such an open dialogue has provided fundamental insight to unveil generic and universal properties in the classification of various phases of matter, and the transitions between them. However, these examples do not exhaust all possible phenomena [6], encouraging further efforts to provide a general classification encompassing other exotic orders.
Some of these studies were initially stimulated by the community working on quantum chaos [7], which looked for a complete classification of various random matrix ensembles depending on their symmetries, leading to the so-called tenfold way [8]. The tenfold way turned out to be a fundamental tool for the classification of non-interacting phases of matter [9] which, in contrast to symmetry-broken phases, can exist within the same symmetry class [10][11][12]. In this case, transitions between different phases of matter can only occur via gap-closing continuous phase transitions, but there is neither symmetry breaking nor any underlying local order parameter. Instead, these new phases are characterized by a topological invariant, the value of which changes abruptly across the symmetry-preserving critical point. This leads to the notion of symmetry-protected topological (SPT) phases [13], which includes the fermionic topological insulators and superconductors, but also other SPT phases of bosons and spins.
From a quantum-information perspective, the recent progress in so-called tensor networks [14] has triggered the interest of this community in the general question of classifying topological phases of matter for generic interacting systems [15], including static and dynamical situations [16]. Note that, despite the considerable progress, a complete classification has so far been accomplished only for (1+1)-dimensional systems [17]. At such reduced dimensionalities, there is essentially a single gapped phase, which is trivial (i.e. it can be transformed into an uncorrelated product state by local unitaries) unless additional discrete symmetries are taken into account. Such symmetries may protect the phases, such that the states belonging to different symmetry sectors cannot be transformed into one another using local symmetry-preserving operations. A detailed understanding and characterization of the properties of these SPT phases, in the presence of interactions and strong correlations, is an open question of current interest. As argued in this work, these phases are not only relevant in condensed-matter systems, but also arise in the context of high-energy physics for certain lattice formulations of quantum field theories (QFTs).
In this paper, we focus on strongly-correlated SPT phases of a paradigmatic model of high-energy physics: the Gross-Neveu model [18]. This QFT describes Dirac fermions with N flavors interacting via quartic interactions in 1 spatial and 1 time dimension, and was originally introduced as a toy model that shares several fundamental features with quantum chromodynamics. We consider a Wilson-type discretization of the QFT [19], and term the lattice version the Gross-Neveu-Wilson model. Despite extensive studies of the GN model, a detailed characterization of its strongly-correlated SPT phases has, to the best of our knowledge, not been carried out, neither in the large-N nor in the finite-N limit. The present work has the ambition of filling this gap using methods of contemporary theoretical physics and numerical simulations. Moreover, we present a scheme for the experimental realization of this discretized QFT using cold-atom quantum simulators. In this way, we hope that the Gross-Neveu model will get upgraded from a toy model used to understand some essential features of more realistic high-energy QFTs into a cornerstone in the classification of correlated topological phases of interest in condensed matter and quantum information, which can also be explored in a realistic experiment of atomic, molecular and optical physics.
We now summarize our main results, and how they are organized in this paper: In Sec. II, we discuss generalities of the Gross-Neveu-Wilson model viewed from the complementary perspectives of high-energy, condensed-matter, and cold-atom physics. This section is intended to bridge the specific knowledge gaps between these different communities, in our effort to provide a self-contained cross-disciplinary study. In Sec. III, we study the occurrence of correlated SPT phases in the model using tools common to high-energy physics. We discuss the phase diagram from the large-N expansion, including both a continuous-time approach (i.e. Hamiltonian field theory on the lattice) and a discretized-time approach (leading to Euclidean field theory on the lattice). This detailed study has allowed us to identify important details of the Euclidean approach, which must be carefully considered in order to understand the phase diagram of the model. In particular, we provide a neat picture where trivial gapped phases and correlated SPT phases, well-known in the condensed-matter and quantum information communities, and parity-breaking Aoki phases, well-known to the lattice field theory community, coexist in a rich phase diagram. Moreover, the large-N approach is exploited to indicate the existence of tricritical points where these three different phases are joined. We benchmark the large-N predictions using tools common to the condensed-matter and quantum-information communities, i.e. tensor-network techniques based on matrix-product-state variational ansätze. As discussed in the text, these quasi-exact numerics for an N = 1 realization of the Gross-Neveu-Wilson model confirm the large-N prediction of the phase diagram, and provide additional information that complements the large-N high-energy-inspired understanding of the model. Finally, we present a proposal for a potential experimental realization of the Gross-Neveu-Wilson model with ultra-cold atoms. In this way, relativistic models of high-energy physics could be explored with table-top non-relativistic experiments at ultra-low temperatures by focusing on low energies and long wavelengths.

The Gross-Neveu model is a relativistic QFT describing N species (flavors) of a massless Dirac field, which live in a (1+1)-dimensional spacetime and interact via four-fermion terms [18]. This model originates from its higher-dimensional counterparts, the so-called Nambu-Jona-Lasinio models [20,21], which were introduced as alternatives to non-Abelian gauge theories [22]. Pre-dating quantum chromodynamics (QCD) [23], these models offer a simplified framework to study essential features of the strong interaction, such as dynamical mass generation by chiral symmetry breaking. In addition to these features, the lower-dimensional Gross-Neveu model was introduced post-QCD as a tractable QFT displaying asymptotic freedom in a renormalizable framework. In contrast to some of its higher-dimensional cousins, this feature permits the derivation of rigorous results concerning the renormalization group and the convergence of perturbation theory [24].
In the continuum, this model is described by the normal-ordered Hamiltonian H = ∫dx :ℋ:, with Hamiltonian density

ℋ = Σ_{n=1}^{N} ψ̄_n(x)(−iγ^1 ∂_x)ψ_n(x) − (g^2/2N)(Σ_{n=1}^{N} ψ̄_n(x)ψ_n(x))^2.   (1)

Here, ψ_n(x) and ψ̄_n(x) = ψ†_n(x)γ^0 are two-component spinor field operators for the n-th fermionic species, and γ^0 = σ_z, γ^1 = iσ_y are the gamma matrices, which can be expressed in terms of Pauli matrices for a (1+1)-dimensional Minkowski spacetime, leading to the chiral matrix γ^5 = γ^0γ^1 = σ_x [25]. Therefore, the Gross-Neveu model describes a collection of N copies of massless Dirac fields coupled via quartic interactions.
The first term in Eq. (1) corresponds to the kinetic energy of the massless Dirac fermions, where we use natural units ħ = c = 1, whereas the second term describes two-body interactions between pairs of fermions that scatter off each other with strength g^2/N. This model has a global, discrete chiral symmetry ψ_n(x) → γ^5 ψ_n(x), ∀x, as follows directly from the anti-commutation relations of the Dirac matrices. Additionally, a global U(N) internal symmetry becomes apparent by introducing Ψ(x) = (ψ_1(x), ···, ψ_N(x))^t and rewriting the Gross-Neveu Hamiltonian density as

ℋ = Ψ̄(x)(−iγ^1 ∂_x)Ψ(x) − (g^2/2N)(Ψ̄(x)Ψ(x))^2,   (2)

which is invariant under the transformation Ψ(x) → (u ⊗ I_2)Ψ(x), ∀x, with a unitary matrix u ∈ U(N). We note that the fields have classical mass dimension d_ψ = 1/2, while the interaction couplings are dimensionless, d_g = 0.
In the limit where the number of flavors N is very large, D. J. Gross and A. Neveu showed that this model yields a renormalizable QFT displaying asymptotic freedom, i.e. the interaction strength g^2 is a relevant perturbation in the infrared (IR), but becomes weaker at high energies in the ultraviolet (UV) limit [18]. Moreover, even if the discrete chiral symmetry prevents the fermions from acquiring a mass to all orders in perturbation theory, they showed that a mass can be dynamically generated through the spontaneous breaking of this chiral symmetry, which can be captured by large-N methods. In contrast to the Higgs mechanism, where masses can be generated by introducing additional scalar fields that undergo spontaneous symmetry breaking themselves, here a physical mass (i.e. gap) is generated dynamically as a non-perturbative consequence of the four-fermion interactions. These results are exact in the N → ∞ limit, and it is possible to calculate the leading corrections for a finite, but still large, N.
A different strategy to explore such non-perturbative effects is so-called lattice field theory (LFT), which discretizes the fermion fields on a uniform lattice Λ_s = aℤ_{N_s} = {x : x/a = n ∈ ℤ_{N_s}}, where N_s is the number of lattice sites and a is the lattice spacing [28]. A naive discretization of the derivative in the Dirac operator, ∂_x ψ_n(x) → [ψ_n(x+a) − ψ_n(x−a)]/2a, yields the Hamiltonian H_N = a Σ_{x∈Λ_s} :ℋ_N:, which describes a system of interacting fermions hopping between neighboring sites of a one-dimensional lattice,

ℋ_N = Σ_{n} [−(i/2a) ψ̄_n(x)γ^1 ψ_n(x+a) + H.c.] − (g^2/2N)(Ψ̄(x)Ψ(x))^2.   (3)

Here, the lattice fields fulfill the desired anti-commutation algebra in the continuum limit. Unfortunately, this naive discretization also leads to spurious fermion doublers which, for g^2 = 0, correspond to massless Dirac fields appearing as long-wavelength excitations around the corners of the Brillouin zone [29]. In the present case, the Brillouin zone is BZ_s = {k = 2πn/(N_s a)} = (−π/a, π/a], such that, in addition to the target massless Dirac field around k = 0, a single doubler arises around the corner k = π/a [30]. Note that, as soon as the interactions are switched on, g^2 > 0, there will be scattering processes where the doubler affects the properties of the massless Dirac field, such that the continuum limit may differ from the desired QFT (1). Among several possible strategies to cope with the presence of such fermion doublers [28], K. Wilson considered introducing a momentum-dependent mass term, the so-called Wilson mass, that sends all the doublers to the cutoff of the lattice field theory [19]. In this way, one expects that these heavy fermions will not influence the universal long-wavelength properties of the continuum limit.
For the Hamiltonian QFT of interest (1), this can be accomplished by introducing an additional Wilson term in the naive discretization (3), leading to H_W = a Σ_{x∈Λ_s} :ℋ_W:, where

ℋ_W = ℋ_N + Σ_n (r/2a) ψ̄_n(x)[2ψ_n(x) − ψ_n(x+a) − ψ_n(x−a)],   (4)

which will be referred to as the Gross-Neveu-Wilson (GNW) model in this work. Here, r ∈ [0, 1] is the so-called Wilson parameter. In the continuum limit, and for g^2 = 0, the mass of the doubler around k = π/a becomes m_π = 2r/a, while the Dirac fermion around k = 0 remains massless, m_0 = 0. We will set r = 1 henceforth, such that the doubler mass coincides with the UV energy cutoff of the QFT. On the other hand, the Dirac field around k = 0 remains massless, and one expects that the IR limit will be governed by the desired chiral-invariant QFT. This situation gets more involved when the interactions are switched on, g^2 > 0, as the additional Wilson terms (4) break the discrete chiral symmetry explicitly (i.e. rΨ̄(x)Ψ(x) → −rΨ̄(x)Ψ(x) under the discrete chiral transformation, since γ^5γ^0γ^5 = −γ^0). Accordingly, the vanishing mass m_0 = 0 of the Dirac fermion around k = 0 is no longer protected by the discrete chiral symmetry, and it can become finite even for perturbative interactions, in contrast to the continuum model. Since one is interested in recovering the QFT (1) for massless Dirac fermions, it is thus necessary to approach the continuum limit using a different strategy. The idea is to introduce an additional mass term in the lattice Hamiltonian (4), leading to H̃_W = a Σ_{x∈Λ_s} :ℋ̃_W:, where

ℋ̃_W = ℋ_W + m Ψ̄(x)Ψ(x),   (5)

and m is a bare mass parameter. By tuning this mass as a function of the interaction strength, m(g^2), one must search for a critical line m = m_c(g^2) where the renormalized mass of the Dirac fermion around k = 0 vanishes, m̃_0 = 0, such that the correlation length fulfills ξ ≫ a (i.e. a second-order quantum phase transition). In this case, the physical quantities of interest become independent of the underlying lattice, and one expects to recover the desired continuum QFT. The key question is to analyze whether such a continuum scale-invariant limit corresponds to the chiral-invariant Gross-Neveu model (1), or whether a QFT of a different nature emerges in the IR limit. The answer to this question may depend on the possible phases of the lattice field theory (4) and the different critical lines in between them. Therefore, addressing this question requires a detailed non-perturbative approach using, for instance, large-N methods on the lattice, or Monte-Carlo methods from lattice field theory. In this work, we will present a detailed large-N analysis of the lattice GNW model, applying it to the prediction of its phase diagram, and benchmarking it with numerical simulations for the N = 1 single-flavour case.
B. Symmetry-protected topological phases for interacting fermions
A wide variety of phase transitions can be understood according to Landau's theory of spontaneous symmetry breaking [4], which exploits the notions of symmetry and local order parameters to classify various phases of matter. Nowadays, we understand that Landau's theory does not exhaust all possibilities, as one can find different phases of matter within the same symmetry class that can only be connected via phase transitions where the symmetry is not broken. These so-called symmetry-protected topological (SPT) phases cannot be described by local order parameters, but require instead the use of certain topological invariants to characterize their groundstate (e.g. topological insulators and superconductors [31,32]). These topological invariants are in turn related to observables displaying quantized values that are robust with respect to perturbations that respect these symmetries (i.e. the topological numbers can only change via a gap-closing phase transition). Accordingly, these new phases of matter can be organized within different symmetry classes, as occurs for the fermionic topological insulators [9,33]. Despite having a gapped bulk, these insulators display a quantized conductance (e.g. integer quantum Hall effect [34]) related to a topological invariant (e.g. first Chern number [10]). A bulk-edge correspondence allows one to understand this topological robustness by the appearance of current-carrying edge excitations through a band-inversion process, corresponding to mid-gap states that are exponentially localized within the boundaries of the material (e.g. one-dimensional edge modes where fermions cannot back-scatter due to disorder [35]).
The connection between SPT phases and LFTs is very natural for three-dimensional time-reversal-invariant topological insulators [31]. Here, the band-inversion process yielding the topological phase leads to an odd number of massless Dirac fermions localized within the boundaries of the material. As emphasized in [36,37], this band inversion can be understood in terms of lower-dimensional versions of domain-wall fermions [38], whereby an odd number of Wilson doubler masses change their sign, each contributing a two-dimensional massless Dirac fermion localized at the boundary. In fact, we note that the Wilson-like terms in Eq. (4) arise very naturally in the low-energy description of topological insulating materials in various dimensionalities [32].
Let us now discuss how these topological effects also appear in the non-interacting limits of the GNW model (3)-(5). Here, the band inversion occurs when tuning the bare mass to lie within m ∈ (−m_π, m_0), where we recall that m_0 = 0 and m_π = 2/a correspond to the masses of the Wilson fermions. To understand the SPT phase in this LFT, we consider periodic boundary conditions, such that the Hamiltonian in momentum space is H̃_W = Σ_{n=1}^{N} Σ_{k∈BZ_s} ψ†_n(k) h_k(m) ψ_n(k), where we have introduced the flavor-independent single-particle Hamiltonian h_k(m) of Eq. (6). By a straightforward diagonalization, one finds the eigenmodes ψ†_{n,η}(k), ψ_{n,η}(k), the creation-annihilation operators of a fermionic excitation with flavor n in the energy band η = ±, with energies

ε_±(k) = ±(1/a)√[(ma + 1 − cos ka)^2 + sin^2 ka].
This band structure has a non-zero gap for m > 0, yielding an insulating phase. In order to show that this insulator is topological, and an instance of an SPT phase, we note that this band structure has an associated topological invariant that can be defined through the Berry connection A_n(k) = i⟨ε_{n,−}(k)|∂_k|ε_{n,−}(k)⟩, where we have introduced the single-particle negative-energy states |ε_{n,−}(k)⟩ = ψ†_{n,−}(k)|0⟩. In our case (6), the Berry connection can be expressed in a closed form (9) that allows one to construct a topological invariant, the so-called Zak phase [39], as the integral of the Berry connection over the Brillouin zone. From Eq. (9), the total Zak phase ϕ_Zak = Σ_n ∫_{BZ_s} dk A_n(k) can be expressed in terms of the Heaviside step function θ(x) (10). We note that, as occurs with the Chern number and the transverse conductivity in the quantum Hall effect [10], the topological Zak phase can be related to an observable: the electric polarization [40,41].
Since the groundstate is constructed by filling all negative-energy states, |gs⟩ = ∏_{k∈BZ_s} |ε_−(k)⟩, the above integral over the whole Brillouin zone (10) characterizes the topological features of the LFT groundstate. Accordingly, this LFT hosts an SPT phase in the parameter regime −m_π < m < m_0 for N odd, which coincides with the band-inversion regime introduced above. This regime can be interpreted as the result of a mass-inversion process, whereby the mass of some of the Wilson fermions gets inverted. This becomes apparent after rewriting the Zak phase in terms of the N Wilson masses m̃_0 = m, m̃_π = m + 2/a.
Indeed, a non-trivial topological invariant (i.e. ϕ_Zak/2π ∈ ℤ) can only be achieved when an odd number of fermion doubler pairs display a different mass sign. We note that this SPT phase can be identified with a one-dimensional topological insulator in the so-called chiral-orthogonal BDI class [9,32,33], which would display zero-energy modes localized at the edges of the chain for open boundary conditions. Note that this chiral symmetry class is not related to the standard notion of chirality in QCD, which is indeed broken by the GNW model. Instead, it is related to the tenfold Cartan classification of symmetric spaces, and its connection to single-particle Hamiltonians via the time-evolution operator [9]. For the non-interacting GNW single-particle Hamiltonian (6), we find that time-reversal T yields T† h_{−k}(m)* T = h_k(m) with T = −iσ_x σ_y = γ^0, and charge conjugation C leads to C† h_{−k}(m)* C = −h_k(m) with C = iσ_z σ_y = γ^5 [25]. The combination of these two anti-unitary symmetries is called chiral, or sub-lattice, symmetry, S = TC, and yields S† h_k(m) S = −h_k(m) with S = γ^0γ^5 = γ^1. To avoid confusion with the chiral symmetry of high-energy physics, which is a fundamental ingredient in low-energy effective descriptions of QCD, and pivotal in our previous discussion of the GNW model, we will refer to S as the sublattice symmetry. Since T^2 = C^2 = S^2 = +1, the corresponding GNW topological insulator with an odd number N of fermion flavors (12) is in the BDI class.
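The mass-inversion criterion can be checked numerically. The sketch below computes the Zak phase of the lower Wilson band as a discrete Berry phase (Wilson loop) over the Brillouin zone; the sigma-matrix form of h_k is reconstructed here from the dispersion and the gamma-matrix choice γ^0 = σ_z, γ^1 = iσ_y quoted above (with a = 1), so it is an illustrative reconstruction rather than a transcription of the paper's explicit Eq. (6):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_k(k, m, r=1.0):
    """Single-particle Wilson Hamiltonian in lattice units (a = 1)."""
    return np.sin(k) * sx + (m + r * (1.0 - np.cos(k))) * sz

def zak_phase(m, nk=2001):
    """Discrete Berry phase of the lower band over the Brillouin zone."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    vecs = []
    for k in ks:
        _, v = np.linalg.eigh(h_k(k, m))
        vecs.append(v[:, 0])             # negative-energy eigenstate
    prod = 1.0 + 0.0j                    # Wilson-loop product of overlaps
    for i in range(nk):
        prod *= np.vdot(vecs[i], vecs[(i + 1) % nk])
    return -np.angle(prod)

print(zak_phase(-1.0))  # band-inverted regime: magnitude ~pi (topological)
print(zak_phase(+0.5))  # trivial regime: ~0
```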
We note that the ten symmetric spaces that classify the topological insulators/superconductors also correspond to the target spaces of effective non-linear-sigma models describing the long-wavelength properties of the edge/boundary. When such a non-linear-sigma model includes a topological term, the edge modes are robust and evade Anderson localization in the presence of symmetry-preserving disorder [9]. This perspective allows us to understand the difference between even and odd N in the GNW model. For N even, there can be symmetry-preserving disorder that couples the different flavors of the edge states, leading to scattering/localization and destroying the BDI topological protection. On the other hand, for N odd, at least one of the edge modes will remain robust against inter-flavor scattering, and thus evade Anderson localization. We note that similar parity effects can also occur in models with more than one fermion doubler in the regime where an even number of Wilson masses gets inverted, as occurs for higher-dimensional time-reversal topological insulators [36].
In contrast to the LFT perspective described in Sec. II A, where one is mainly interested in searching for the second-order quantum phase transitions to recover a continuum limit described by the QFT of interest (1), the study of symmetry-protected topological phases focuses on the topological gapped phases away from criticality. Interestingly, even in the non-interacting regime, the emerging QFTs governing their response to external fields turn out to be very different from the original discretized QFT, and can be described in terms of topological quantum field theories (e.g. Chern-Simons or axion QFTs) [42]. A generic question of current interest in the study of SPT phases is to explore the interplay of topological features and strong-correlation effects as interactions between the fermions are switched on [43].
For the GNW lattice model (3)-(5), the interactions do not modify the symmetry class, as Ψ̄Ψ → Ψ̄Ψ and Ψ̄γ^5Ψ → ∓Ψ̄γ^5Ψ under time-reversal and charge-conjugation transformations, respectively [25]. Accordingly, the quartic terms in Eq. (2), or the chiral extension introduced in Eq. (16) below, do not modify the aforementioned BDI symmetry class. A question of potential interest for both the SPT and LFT communities is the precise determination of the critical lines of the lattice model, m_c(g^2), for non-perturbative interactions. From this knowledge, the LFT community can explore the nature of the continuum QFT in the vicinity of the critical line, while the SPT community may study how the topological phase is modified in the presence of interactions. As argued above, a possible tool to study non-perturbative effects could be large-N methods, or Monte-Carlo methods in Euclidean lattice field theory. We remark, however, that the standard Euclidean approach, where time is also discretized [28], can lead to qualitative differences in the phase diagram in the (m, g^2) plane. Discretizing time introduces additional fermion doublers, which may lead to additional critical lines that are not present in the Hamiltonian approach (3)-(5) with continuous time [44]. Although this is not relevant when one is only interested in the nature of the continuum QFT, it is of relevance for topological insulators, where one is interested in the finite region of phase space hosting the topological gapped phase. In this work, we will show that special care in the Euclidean lattice formulation is required in order to recover the relevant phase diagram.
For one-dimensional models, the study of lattice field theories in the Hamiltonian approach can be efficiently accomplished using variational methods based on Matrix Product States (MPS) [45]. In this work, we shall confront predictions of the large-N approximation with results from MPS numerical methods for the study of topological insulating phases in the GNW model with Wilson fermions.
C. Cold-atom quantum simulators of high-energy physics
As an alternative to Monte-Carlo numerical methods in lattice field theory, one may follow R. P. Feynman's insight [46], and develop schemes to control a quantum-mechanical device such that its dynamics reproduces faithfully that of the model of interest (i.e. quantum simulation). From this perspective, a very appealing application of the future fault-tolerant quantum computers will be their ability to function as universal quantum simulators [47] that can address complicated quantum many-body problems relevant for different disciplines of physics and chemistry. Prior to the development of quantum error correction and large-scale fault-tolerant quantum computers, one may consider building special-purpose quantum simulators that are designed to tackle a particular family of models. This is the case of cold-atom quantum simulators of lattice models [48,49], where neutral atoms are laser-cooled to very low temperatures in deep optical lattices [50].
In the continuum, neutral-atom systems are typically described by a Hamiltonian QFT, albeit a non-relativistic one [50], with H = ∫d^3x :(ℋ_0 + 𝒱_I):, where Ψ_{n,σ}(x), Ψ†_{n,σ}(x) are field operators that create/annihilate an atom of the n-th species in the internal state σ. This Hamiltonian contains (i) the kinetic energy for a multi-species gas of alkali atoms of mass m_n, where n ∈ {1, ···, N_sp} labels the atomic species/isotope; (ii) the internal energy ε^n_σ of the atomic groundstate manifold, which typically consists of various hyperfine levels characterized by the quantum numbers associated with the total angular momentum, σ ∈ {F, M_F}; and (iii) the single-particle terms V^{σ,σ'}_n(x), which contain the trapping potential that confines the n-th atomic species and, possibly, additional radiation-induced terms that drive transitions between the different atomic levels σ → σ'. In particular, we shall be interested in periodic trapping potentials due to the ac-Stark shift of pairs of retro-reflected laser beams, which will depend on the atomic species, but not on the particular hyperfine level (i.e. state-independent optical lattices). We also consider laser-induced Raman transitions via highly off-resonant excited states. Altogether, this leads to single-particle terms in which V^n_{0,ν} is the ac-Stark shift for the n-th atomic species stemming from the retro-reflected beams with wave-vector k_{L,ν} along the ν-axis. Additionally, ω_{n,ν} is the frequency of a residual harmonic trapping due to the intensity profile of the lasers. Finally, Ω^{n,l}_{σ,σ'} is the two-photon Rabi frequency for the Raman transition induced by the l-th pair of laser beams with wave-vector (frequency) difference Δk_l (Δω_l), and phase ϕ_l.
In addition, at sufficiently low temperatures, the neutral atoms also interact via contact scattering processes, where the interaction strengths U^{n,n'}_{σ,σ'} depend on the s-wave scattering lengths a_{σσ'} of the corresponding channels, some of which can be controlled by Feshbach resonances [50]. We also note that fully-symmetric interactions between all species can be achieved by using alkaline-earth atoms [51], which could be an interesting property for the experimental realization of higher numbers of flavors N in the Gross-Neveu-Wilson model.
As announced above, in the regime of deep optical lattices, V^n_{0,ν} ≫ E^n_R = k_L^2/2m_n, one can introduce the basis of so-called Wannier functions, which are localized at the minima of the potential, and show that this non-relativistic QFT yields a family of Hubbard-type models with tunable parameters [52,53]. Therefore, by doing controlled table-top experiments with cold atoms, it becomes possible to explore the physics of strongly-correlated electrons in solids, which has opened a fruitful avenue of research in quantum simulations of condensed-matter models [48,49]. More recently, several works have explored the possibility of extending this cold-atom Hubbard toolbox [54] to the quantum simulation of high-energy physics, including relativistic QFTs [36,[55][56][57][58][59], gauge field theories [60][61][62][63][64], theories for coupled Higgs and gauge fields [65,66], and also theories of relativistic fermions interacting with Abelian/non-Abelian gauge fields [67][68][69].
In this work, we shall be concerned with a cold-atom realization of the Gross-Neveu model using a Wilson-fermion discretization (3)-(5). We note that there are cold-atom proposals to implement this QFT (1) with optical superlattices via a different discretization [56], the so-called staggered fermions [30,44]. Since we are interested in the connection of this model with correlated SPT phases, we will instead focus on the Wilson-fermion approach of Eqs. (3)-(5). Building on previous proposals for the quantum simulation of Wilson fermions [36,[71][72][73], we present in this work a simplified scheme to realize the GNW model using a two-component single-species Fermi gas confined in a one-dimensional optical lattice with laser-assisted tunneling.
III. CORRELATED SYMMETRY-PROTECTED TOPOLOGICAL PHASES IN THE GROSS-NEVEU-WILSON MODEL
A. Phase diagram from the large-N expansion

As advanced in the previous sections, our goal is to determine the critical lines of the GNW model (3)-(5) as a function of the coupling strength, m(g^2), for non-perturbative interactions. We start by developing a large-N expansion for the partition function Z = Tr[e^{−βH̃_W}], where β = 1/T is the inverse temperature for k_B = 1. In the continuum, large-N methods were first employed by Gross and Neveu to prove that the groundstate of their eponymous model (1) displays a non-zero vacuum expectation value σ_0 = ⟨Ψ̄(x)Ψ(x)⟩ ≠ 0, ∀x, as soon as a non-vanishing interaction g^2 > 0 is switched on [18]. In this way, the discrete chiral symmetry is spontaneously broken. This non-perturbative result can be obtained using functional techniques to calculate an effective action for an auxiliary bosonic field σ(x), which condenses due to the formation of particle-anti-particle pairs and acquires a non-zero expectation value ⟨σ(x)⟩ = σ_0 ≠ 0 in the chirally-broken phase. On the lattice (3)-(4), similar results are recovered in the continuum limit [74,75], provided that the additional bare mass (5) is adjusted to recover the discrete chiral symmetry.
Let us now comment on a generalization of the GNW model, where the above discrete chiral symmetry is upgraded to a continuous one, Ψ(x) → I_N ⊗ e^{iθγ^5} Ψ(x), ∀θ ∈ [0, 2π) [18]. This requires a modified four-fermion term of the form

𝒱 = −(g^2/2N)[(Ψ̄(x)Ψ(x))^2 + (Ψ̄(x)iγ^5Ψ(x))^2].   (16)

In this case, in addition to the σ(x) field, it is natural to introduce an additional bosonic field Π(x), obtaining an effective action for both fields in the large-N limit. In Ref. [76], S. Aoki showed that the large-N results with lattice Wilson fermions lead to a richer phase diagram displaying new regions where a discrete parity symmetry is spontaneously broken. In this case, the particle-anti-particle pairs lead to the so-called pseudoscalar condensate ⟨Π(x)⟩ = Π_0 ≠ 0, which necessarily breaks parity due to the vacuum expectation value of the corresponding fermion bilinear, ⟨Ψ̄(x)iγ^5Ψ(x)⟩ ≠ 0. Interestingly, these results on the chiral GNW model were used to conjecture that such so-called Aoki phases would also appear in the phase diagram of lattice quantum chromodynamics [76]. However, in that context, these Aoki phases are considered unphysical lattice artifacts not present in the continuum QFT.
In this section, we discuss the role of such Aoki phases in the GN model (16) with a Wilson-type discretization (3)-(5), and their interplay with the topological insulating phases discussed in the previous sections. In the context of symmetry-protected topological phases, such Aoki phases are not artifacts, but become instead physical phases of matter that delimit the region of the phase diagram hosting a correlated SPT phase. Moreover, from the perspective of a cold-atom implementation, these phases might also be observed in future table-top experiments. We also note that the appearance of Aoki phases is not restricted to the GNW model, but also occurs in strong-coupling calculations of U(1) Wilson-type lattice gauge theories [37,77], which can be used to model the strongly-interacting limit of higher-dimensional topological insulators with long-range Coulomb interactions.
We remark that, in the limit of a single fermion flavor N = 1, which is the relevant case for the cold-atom implementation, the four-fermion interactions of Eq. (3) can be rewritten as (17) which follows from a so-called Fierz identity in the language of relativistic QFTs. Accordingly, besides a change in the coupling constant g 2 → g 2 /2, there is no further distinction between the N = 1 GNW model with discrete or continuous symmetry, such that the previous Aoki phases could in principle also occur in this limiting case. However, since their prediction is based on the N → ∞ results, we will have to benchmark large-N methods with other non-perturbative approaches valid for N = 1 (e.g. MPS numerical simulations or a potential cold-atom quantum simulation). Regarding the first approach, and give a detailed comparison of the large-N predictions with the MPS results of the phase diagram.
Continuous time: Hamiltonian field theory on the lattice
Let us first discuss the large-N phase diagram of the GNW model using a functional-integral representation of the partition function with a continuum Euclidean (i.e. imaginary) time τ. Introducing fermionic coherent states by means of mutually anti-commuting Grassmann variables Ψ k (τ), Ψ k (τ), which are defined at each point of the Brilllouin zone k ∈ BZ s and for each imaginary time τ ∈ (0, β ) [78], one can readily express the finite-temperature partition function as where the Euclidean action is is defined in terms of the singleparticle Hamiltonian in Eq. (6). Moreover, V g (Ψ , Ψ) results from substituting the fermion field operators by the Grassmann variables in the normal-ordered interaction (17), which leads to quartic interactions Let us note that the propagator associated to the free part of the action displays two poles at k ∈ {0, π/a} when −ma ∈ {0, 2}, which correspond to the aforementioned Dirac fermions around the corners of the Brillouin zone.
The first step in the large-N approximation is to introduce two auxiliary real scalar fields σ (x), Π(x) with classical mass dimension d σ = d Π = 1, such that the partition function can be expressed as a new functional integral over both Grassmann and real auxiliary fields Ψ] up to an irrelevant constant, such that the thermodynamic properties of the system are not modified by the introduction of the auxiliary fields. The idea is to chose a particular action where the four-fermion terms can be understood as effective interactions carried by the auxiliary bosonic fields. Moreover, assuming that these fields are homogeneous, the new action becomes whereH k = I N ⊗h k , and the fermionic single-particle Hamiltonian now depends on the auxiliary bosonic fields Essentially, the σ field modifies the mass term of the Dirac fermion, and a vacuum expectation value of the former would thus renormalize the fermion mass, resembling the dynamical mass generation of the continuum model. The second step in the large-N approximation is to integrate over the fermionic Grassmann fields, obtaining an effective action for the auxiliary bosons Z = [dσ dΠ]e −NS eff [σ ,Π] . This step can be readily performed since the Grassmann integral is Gaussian, which leads to where L = N s a is the length of the chain, and we have introduced an abbreviation for the integral over momentum and 2π , assuming already the zero-temperature limit which is the regime of interest of this work. Here, the energies of the new fermionic single-particle Hamiltonian ε k (m+σ , Π) have been expressed in terms of the function When the number of fermion flavors is very large N → ∞, the partition function Z = [dσ dΠ]e −NS eff [σ ,Π] with the effective action (22) yields a groundstate obtained from the saddle point equations ∂ σ S eff | (σ 0 ,Π 0 ) = ∂ Π S eff | (σ 0 ,Π 0 ) = 0. Nonvanishing values of σ 0 , Π 0 are related to the breaking of the discrete chiral or parity symmetries discussed above. For instance, the boundary of the aforementioned Aoki phases can be obtained from the self-consistent solution of these saddlepoint equations imposing Π 0 = 0. Using contour techniques for the frequency integrals, and substituting ka → k in the momentum integrals, we can express the pair of saddle-point equations as follows Here, we have used the complete elliptic integrals of the first and second kind as well as the following parameters In general, the solution of the pair of gap equations (24) must be performed numerically, and leads to the critical lines that delimit the Aoki phase (i.e. solid green lines in Fig. 1). These lines can be interpreted as different flows of the bare mass m c (g 2 ) that determine the second-order phase transitions where a scale-invariant QFT should emerge. Note that this figure displays a clear reflection symmetry with respect to the axis −ma = 1. In fact, using the expression of the elliptic integrals in terms of hypergeometric functions [79], it follows that 1)), which can be exploited to show that the gap equations (24) can be rewritten as These gap equations can now be related to the original ones in Eq. (24) under the following transformation which corresponds to the aforementioned reflection symmetry about −ma = 1, and leads to η 0 → −η 0 and θ 0 → θ 0 /(θ 0 −1). Accordingly, there should only be three distinct phases in the regime −ma ∈ [0, 2], with the Aoki phase being completely absent for ma > 0 and ma < −2.
To make a connection to the continuum results [18], and interpret this phase diagram in light of the symmetry-protected topological phases of Sec. II B, we note that a solution to the gap equations (24) can be found analytically in the regime of small interactions and masses g 2 , |ma| 1. In this case, one can assume that η 0 = 1 + δ η 0 with |δ η 0 | 1, and perform a Taylor expansion of Eq. (24) to find that the σ -field acquires the following non-zero vacuum expectation valuẽ The first contribution stems for the perturbative renormalization of the bare mass ma ≈ −g 2 /2π, while the 1/g 2 behavior of the second contribution highlights that the large-N expansion captures non-perturbative effects, recalling the chiral symmetry breaking by dynamical mass generation of the continuum case [18]. We also note that, as the UV cutoff is removed a → 0, the interaction strength must decrease g 2 → 0 to maintain a finite scalar condensate (29), which shows that the continuum GNW model is an asymptotically free QFT. As announced above, such a vacuum expectation value (29) then leads to a small renormalization of the Wilson masses (11),m k →m k (g 2 ), valid in the regime g 2 , |m| 1. We can thus ascertain that the large mass of the fermion doubler will only be perturbed slightly, remaining thus at the cutoff, and maintaining sgn(m π (g 2 )) = +1. Conversely, the sign of the light-fermion mass sgn(m 0 (g 2 )) may indeed change as the interactions g 2 are increased. According to Eq. (12), we can write the topological invariant in this regime as ϕ Zak = 1 2 Nπ 1 − sgn m 0 (g 2 ) , such that the region hosting a correlated BDI topological-insulating groundstate corresponds to the parameter region withm 0 (g 2 ) < 0.
In order to locate this region, we substitute the saddle-point solution (29) into Eq. (20), and perform a long-wavelength approximation |k| < Λ 1/a, yielding the effective freefermion actioñ up to an irrelevant constant. Here, we have introduced , where the single-particle Hamiltonian for a massive Dirac fermion is which allows us to identify the renormalized Wilson mass The leftmost red dashed line of Fig. 1 corresponds to the points where this renormalized mass vanishesm 0 (g 2 ) = 0. We note that this analytical solution matches the lower critical line obtained by the numerical solution of the gap equations (24) remarkably well, even considerably beyond the perturbative regime g 2 , |ma| 1. Following Eq. (32), the area below this line fulfills −m >σ 0 , such that the interacting Dirac fermion has a negative renormalized massm 0 (g 2 ) < 0, leading to ϕ Zak = Nπ and to an SPT phase for N odd.
An analogous behavior can be found in the regime g 2 , |ma+ 2| 1, where the light fermion is around k = π/a, while the heavy one corresponds to k = 0 (i.e. the Wilson fermions interchange their roles). Using the previous symmetry (28) to locate the critical linem π (g 2 ) = 0 in this parameter regime, we can readily predict the value of this renormalized mass m π = m + 2/a →m π (g 2 ) = m + 2/a −σ 0 . The vanishing of this mass leads to the rightmost red dashed line of Fig. 1, which again agrees very well with the numerical solution of the gap equations. Since the heavy fermion around k = 0 has a large negative mass, the topological invariant becomes ϕ Zak = 1 2 Nπ sgn m π (g 2 ) + 1 , and one can identify the symmetry-protected phase displaying ϕ Zak = Nπ for N odd, with the parameter region fulfillingm π (g 2 ) > 0, and thus −m < 2 −σ 0 (i.e. shaded yellow area below the dashed line).
At larger couplings and intermediate masses, we must resort to the numerical solution of the gap equations, and search for a region of phase diagram that can be adiabatically connected to these two areas hosting a topological phase. This is precisely the shaded yellow lobe of Fig. 1, which is separated from other phases by a gap-closing line. The area above these lines, given bym 0 (g 2 ) > 0, andm π (g 2 ) < 0, determines a regime where both renormalized Wilson masses have the same sign, such that the gapped phase has no topological features, corresponding either to a trivial band insulator (grey area in Fig. 1), or to the aforementioned Aoki phase where the Z 2 parity symmetry Ψ(x) → I N ⊗ γ 0 Ψ(x) is spontaneously broken (green area in Fig. 1).
Discretized time: Euclidean field theory on the lattice
We now move on to the discussion of the large-N phase diagram of the GNW lattice model using a discretized Euclidean time x 0 = τ. This is the most common formalism in lattice field theory computations [28], and can become the starting point to apply other methods such as Monte-Carlo numerical techniques. As emphasized below, it will be important to understand the connection between the lattice and Hamiltonian approaches, requiring a careful treatment of the continuumtime limit to understand lattice artifacts that can change qualitatively the shape of the phase diagram.
In Euclidean LFT, both space-and time-like coordinates {x ν } ν=0,1 are discretized into an Euclidean lattice Λ E = {x x x : x 0 /a 0 = n τ ∈ Z N τ , x 1 /a 1 = n s ∈ Z N s }, where N τ (N s ) is the number of lattice sites in the time (space) -like direction, and a 0 (a 1 ) is the corresponding lattice spacing. Therefore, a similar discussion to the one around Eqs. (3)-(5) must also be applied to the Euclidean time derivative appearing in the action (18), such that nearest-neighbor hoppings along the timelike direction also appear. Introducing fermionic coherent states on the Euclidean lattice, and their corresponding Grassmann variables Ψ x x x , Ψ x x x , the finite-temperature partition function can be expressed as Here, the action is divided into: (i) the free quadratic term which is expressed in terms of the Euclidean gamma matriceŝ γ 0 = γ 0 ,γ 1 = iγ 1 , and the unit vectors {e ν } of a rectangular lattice; and (ii) the interacting quartic term which is expressed in terms of the chiral matrixγ 5 = γ 5 .
(i) Lattice approach with dimensionless fields: Let us note that, in the lattice Wilson approach [80], it is customary to work with dimensionless fields ψ x x x = √ a 0 + a 1 Ψ x x x , and rewrite the action as follows is expressed in term of dimensionless tunnelings κ ν , and the dimensionless massm. Similarly, the interacting term is obtained from Eq. (35) by substituting the fields Ψ → ψ and the coupling constant g →g by the dimensionless ones.
Since the Grassmann variables must fulfill periodic (antiperiodic) boundary conditions along the space (time) -like directions, one can move into momentum space ψ k k k , ψ k k k , where the dimensionless quasi-momenta belong to the Euclidean Brillouin zone BZ E = {k k k : k 0 = 2π(n τ + 1/2)/N τ , k 1 = 2πn s /N s } = (0, 2π] 2 . Then, one can rewrite the action as where we have introduced S k k k (m) = I N ⊗ s k k k (m), together with the single-flavor action Let us note that, in contrast to the continuum-time free action (18), this Euclidean action leads to a propagator with four poles at k k k ∈ {(0, 0), (0, π), (π, 0), (π, π)} when the bare mass equals −m ∈ {0, 4κ 0 , 4κ 1 , 4(κ 0 + κ 1 )}, each of which corresponds to a long-wavelength Dirac fermion. Accordingly, there is an additional doubling due to the discretization of the the Euclidean time direction (i.e. the extra fermions with k 0 = π shall be referred to as time doublers). At this point, the discussion parallels that of the Hamiltonian formalism of Sec. III A 1 via the corresponding steps for the large-N approximation. First, the auxiliary dimensionless lattice fieldsσ x x x ,Π x x x are introduced, such that the action can be rewritten as Here, we have assumed again that the auxiliary fields are homogeneous, introducingS k k k (σ ,Π) = I N ⊗s k k k (σ ,Π), such that the new single-flavor action can be obtained from Eq. (38) usings k k k (m +σ , Π) = s k k k (m +σ ) + iγ 5Π . The second and third steps are the same, since the action is quadratic in Grassmann fields, and the saddle-point solutions control the large-N limit. In this case, the gap equations can be expressed as which are equivalent to those derived in [74] upon a different definition of the microscopic couplings.
We have solved this system of non-linear equations for different Euclidean lattices with N τ = ξ N s , setting N s = 512 sites in the space-like direction, and using ξ = a 1 a 0 ∈ {1, 2, 4, · · · , 128} to approach the time-continuum limit ξ → ∞ (see Fig. 2). Let us note that the dimensionless tunnelings can be expressed in terms of the anisotropy parameter as κ 0 = ξ /2(1 + ξ ), and κ 1 = 1/2(1 + ξ ). At this point, it is worth mentioning that the number of lattice sites in the timelike direction N τ is also modified in the LFT community to explore non-zero temperatures. In that case, however, the κ ν parameters remain constant as N τ is varied (i.e. the Euclidean lattice is rectangular, but the unit vectors remain the same).
In Fig. 2(a), we represent the solution of the gap equations for the isotropic lattice ξ = 1, such that κ 0 = κ 1 = 1 4 . We note that the characteristic trident-shaped phase diagram is in qualitative agreement with the results of S. Aoki [76]. In order to interpret this phase diagram in terms of the symmetryprotected topological phases, let us recall the distribution of the poles described below Eq. (38). Atg 2 = 0, we observe that the critical points separating the different phases correspond to −m ∈ {0, 1, 2}, which lie exactly at the aforementioned poles signaling the massless Dirac fermions. For −m ∈ (0, 1), the only Dirac fermion with a negative mass is that around we see that ϕ Zak = Nπ for −m ∈ (0, 1), corresponding to the BDI topological insulator for N odd. For −m ∈ (1, 2), the Wilson fermions around k k k = (0, π) and k k k = (π, 0) also invert their masses, leading to ϕ Zak = −Nπ, and yielding again an BDI topological insulator for N odd. These two areas, extend on to the neighboring lobes of Fig. 2(a) using a similar reasoning as the one presented around Eq. (30). Therefore, the whole region below the trident that delimits the parity-broken Aoki phase corresponds to the BDI topological insulator. We note, however, that the black dashed lines in this figure, and subsequent ones, do not follow from the solution of the large-N gap equations, but are included as a useful guide to the eye to delimit the SPT phases. In Sec. III B below, we will show that they indeed correspond to a critical line delimiting the SPT phase of a carefully-defined time-continuum limit. Let us start exploring how this phase diagram changes as the time-continuum limit is approached, and compare the results to those of Fig. 1. In Fig. 2(b), we represent the phase boundaries for an increasing number of lattice sites N τ = ξ N s with anisotropies ξ ∈ {1, 2, 4, 8}. Here, one can observe how the central prong of the Aoki phase separating the topologicalinsulating lobes is split into two peaks, each of which goes in a different direction as ξ is increased. We note that this behavior differs markedly from the finite-temperature studies, which show that the lobe structure disappears completely as N τ is varied [76]. Therefore, the anisotropy in the lattice constants gives rise to a different playground, which must be understood in terms of the symmetry-protected topological phases. Since κ 0 → 1/2, while κ 1 → 0, as the anisotropy ξ → ∞, one can identify the left-moving prong with the pole at k k k = (0, π) with mass −ma → 4κ 1 → 0, and thus approaching the lower left corner. Similarly, the right-moving one can be identified with the pole at k k k = (π, 0) with mass −ma = 4κ 0 → 2 approaching the lower right corner. As a result of this movement, and considering the signs of the corresponding Wilson masses, one finds that the region between these two poles correspond to a situation where both space-(time-) like doublers have a negative (positive) mass, such that the topological invariant vanishes ϕ Zak = 0, and one gets a trivial band insulator. Unfortunately, as the anisotropy increases, the two BDI topological lobes get smaller and smaller, such that the symmetryprotected topological phases vanish as we approach the timecontinuum limit, and the central lobe corresponds to a trivial band insulator (see Fig. 2(c)).
This result seems to be in contradiction with our findings for the Hamiltonian formalism in Fig. 1, which predict that the central lobe should correspond to the correlated SPT phase with ϕ Zak = Nπ. Moreover, since each of the two prongs now contain a pair of massless Dirac fermions, the continuum QFT that should emerge in the long-wavelength limit is no-longer that of the Gross-Neveu model for N flavors, but rather that of the Gross-Neveu model for 2N flavors, which would indeed modify the universal features of the phase transition, and not only the non-universal shape of the critical line. As mentioned at the beginning of this section, the Euclidean approach can lead to lattice artifacts that can modify qualitatively the phase diagram, and a detailed and careful account of the timecontinuum limit is required to understand them. We address precisely this issue in the two following subsections.
(ii) Large-N phase diagrams with rescaled couplings: We have found that one of the problems leading to the apparent contradiction between the phase diagrams is the standard use of dimensionless quantities in the Euclidean lattice approach (36). A detailed derivation of this action, which starts from the original action (33) rescaling the fields, shows that the dimensionless parameters are related to the original ones by the following expressioñ Although apparently innocuous, this rescaling changes qualitatively the shape of the phase diagram (see Fig. 3). In order to understand the main features of this phase diagram, the location of the non-interacting poles will be very useful again. For instance, at g 2 = 0, we note that the pole at −m = 4κ 0 gets mapped into −ma 1 = 4(1 + ξ )κ 1 . Therefore, as the time-continuum limit is approached, this pole tends to −ma 1 → 2 as ξ → ∞, and no longer to the origin. Likewise, both time-like doublers at −m ∈ {4κ 0 , 4(κ 0 +κ 1 )} are mapped into −ma 1 → ∞ in the time-continuum limit. Accordingly, in the region of interest displayed in Fig. 3, these time doublers have a very large positive mass. Inspecting the sign of the corresponding Wilson masses, we can conclude that the region −ma 1 ∈ (0, 2) will host an BDI topological insulator, while a trivial insulator will set in for −ma 1 > 2.
Following a similar reasoning as in previous subsections, we know that these critical points surrounding the topological phase will flow as the interactions are switched on and the σ field acquires an non-zero vacuum expectation value. Accordingly, we identify the lobe of Fig. 3 as the BDI topological insulator that also appeared in the continuum-time Hamiltonian formalism of Fig. 1. Moreover, the universal features are now in agreement as the critical lines are controlled by a single pole, and the long-wavelength limit should now be controlled Let us remark that, although the rescaled solution looks somewhat closer to the Hamiltonian results, there are still qualitative differences in the lattice approach that deserve a deeper understanding. For instance, the phase diagram does no longer display the mirror symmetry about −ma 1 = 1 (28).
(iii) Continuum limit and connection to the Hamiltonian approach: In order to understand these differences, and the connection to the gap equations continuum limit (24), let us consider the original action with dimensional fields (33). Following the same steps as before, one can integrate the fermion fields, Z = [dσ dΠ]e −NS eff [σ ,Π] , finding the following effective action which is the Euclidean lattice version of Eq. (22). Here, we have introduced the corresponding lengths L τ = N τ a 0 , L s = N s a 1 , together with the following function If we now take the limit of N → ∞, the saddle point conditions ∂ σ S eff | (σ 0 ,Π 0 ) = ∂ Π S eff | (σ 0 ,Π 0 ) = 0 lead to the following pair of gap equations, which are equivalent to Eqs. (40)- (41) but using dimensional couplings and dimensional fields, In order to make a connection to the gap equations obtained with the Hamiltonian formalism (24), we should take the continuum limit in the imaginary time direction N τ → ∞, and a 0 → 0, such that L τ = L s remains constant imposing ξ = a 1 /a 0 → ∞. To deal with the additional time doublers mentioned above, let us introduce a UV cutoff Λ τ 1/a 0 , and make a long-wavelength approximation around k 0 ∈ {0, π/a 0 }. We find that the gap equation (40) becomes where we have used the single-particle energies of Eq. (23) and the spatial Brillouin zone, after identifying a = a 1 . We note that the first line of this expression comes from the contribution around k 0 = 0, while the second line stems from the time doublers around k 0 = π/a 0 . We observe that the effective Wilson mass of these doublers becomes very large in the continuum limit m + 2/a 0 → ∞ if one keeps the bare mass m non-zero. Hence, these doublers become very massive, and their contribution to above gap equation should become vanishingly small as described below Eq. (43). To prove that, let us get rid of the cutoff Λ τ → ∞, and use ∑ |k 0 |<Λ τ → L τ ∞ −∞ dk 0 /2π. After performing the integral using contour techniques, we directly obtain 1 g 2 = π −π dk 1 4π 1 (ma + σ a + (1 − cos k 1 a)) 2 + sin 2 k 1 a + π −π dk 1 4π 1 (ma + σ a + (1 + 2ξ − cos k 1 a)) 2 + sin 2 k 1 a , where we have also taken the continuum limit in the spacelike direction. Using the definition of the complete elliptic integrals (25), this equation can be expressed Here, we have used the parameters of Eq. (26), together with which determine the contribution of the time doublers to the gap equation (i.e. second term of Eq. (48)). In the continuum limit, we take ξ → ∞, such thatη 0 → ∞, andθ 0 → 0. This makes K(θ 0 ) → π/2, such that the time-doubler contribution vanishes, and we recover exactly the gap equation of the Hamiltonian approach (24).
The continuum limit of the remaining gap equation (41) follows the same lines: we perform a long-wavelength approximation around the time doublers, let the cutoff Λ τ → ∞, and use contour integration to find Using the definition of the complete elliptic integrals (25), this equation becomes where the contribution of the time doublers is expressed in the second line. In this case, taking the time-continuum limit ξ → ∞, such that E(θ 0 ) → π/2, leads to which contains an additional −g 2 /2 term with respect to the gap equation of the Hamiltonian formalism (24). We thus find that, in contrast to the first gap equation (49), the contribution of the time doublers is no longer vanishing in this case, but can instead be understood as a finite renormalization of the bare mass ma → m r a = ma + g 2 /2.
It is precisely this renormalization which is responsible for the lack of the mirror symmetry (28) in Fig. 3, and its qualitative difference with respect to the Hamiltonian prediction of Fig. 1. These results can thus help us to identify the corresponding mirror symmetry, which is no longer about the vertical line −ma = 1, but instead about −m r a = 1, which corresponds to the red dashed line −ma = 1 + g 2 /2 of Fig. 3.
To study in more detail the onset of this symmetry in the continuum limit, and the quantitative agreement with the Hamiltonian prediction, we plot the phase diagram with the corresponding renormalized mass in Fig. 4, and superimpose the continuum-time prediction of Fig. 1. This figure shows the clear agreement between both approaches, and highlights the importance of performing a careful analysis of the continuum limit in order to avoid Euclidean lattice artifacts that can lead to qualitatively different predictions, even questioning the universal aspects of the emerging QFTs (see Fig. 2). It also highlights the fact that the time doublers, despite becoming infinitely heavy in the continuum limit, can leave an imprint in the non-universal properties of the low-energy phase diagram, such as the particular value of the critical points (see the tilted phase diagram of Fig. 4). From the perspective of the renormalization group, this effect does not come as a surprise, since the time doublers lie at the cutoff of the continuum-time limit of the lattice field theory, and their integration can thus renormalize the parameters of the long-wavelength light-fermion modes. In this case, a careful analysis of the gap equations has allowed us to extract an additive renormalization δ m = g 2 /2a which, as usual in discretized QFTs, depends on the remaining UV cutoff and shows that the bare mass must be fine tuned to a cutoff-dependent value to yield the physical mass of the low-energy excitations.
In order to address this point, we apply large-N techniques away from half filling via the introduction of a chemical potentialμ in the GNW model. Following the orthodox prescription for Euclidean LFTs [83], the hopping term κ ν (1 − sγ ν ) of the Euclidean action S E W (36) is modified to e sμδ ν0 κ ν (1 − sγ ν ), such that time-like hopping is promoted in the forwards direction by a factor eμ , and suppressed by e −μ when hopping backwards. As a consequence, one can study the phase diagram of the GNW model at finite densities by solving the gap equations (40)-(41) with the sum over over the time-like momenta now given by k 0 = 2π(n τ + 1/2)/N τ − iμ.
Moreover, using the Euclidean partition function, one finds that the conserved fermion charge density n q = − ∂ ln Z ∂μ is Settingμ → 0, this quantity becomes proportional to the expectation value of the time-like component of the vector , which is the discretized version of the continuum vector cur- : for Wilson fermions. Therefore, the time-like component is simply related to the fermion density in the continuum limit, and we can readily explore situations away from half-filling n q = 0. Interestingly, while the gap equations (40)- (41) remain symmetrical under the transformation (28) using the renormalized mass (54), the charge density n q (m,g 2 ,μ) has only an approximate symmetry.
We now solve the gap equations (40)-(41) with a dimensional chemical potential µ ≡ ξμ = 0, which yield the phase diagram of Fig. 5(a), where the axes have been rescaled to match those of Fig. 4. We see that, as a consequence of the non-zero-chemical potential, the leftmost and rightmost cusps of the half-filled phase diagram of Fig. 4 split into a couple of cusps each, such that the region hosting the Aoki phase becomes smaller. The decrease of the Aoki phase can be qualitatively understood as follows. For µ > 0, one expects that the charge density n q will eventually rise from the value n q = 0 characterizing the half-filled regime. As a consequence, a Fermi surface will be formed in a certain parameter regime, which consists of two disconnected Fermi points in 1+1d. This has the effect of disfavoring the particle-anti-particle pairing required for the pseudoscalar condensate Π(x) = 0, since the excitation energy for such a zero-momentum pair would be on the order of ∆ε ∼ 2µ. Accordingly, the Aoki phase should shrink as the chemical potential is increased. This is corroborated by the curves for n q (m r ) in Fig. 5(b), which rise above zero only around the borders of the dropletshaped region between the two newly-formed cusps for µ > 0 (see the grey regions in Fig. 5(a)). These are precisely the regions where the half-filled Aoki phase has been expelled from. We also note that Fig. 5(b) shows an approximate symmetry about −m r a = 1 which we expect to become exact for ξ → ∞.
Let us now describe how these results can be used to determine, in a controlled way, the extent of the Aoki phase at half-filling. The empty circles of Fig. 5(a) mark the so-called onset, beyond which the ground state has a non-zero charge density (i.e. n q > 0 for µ > µ o (g 2 )). By numerically obtain- ing such onsets for a variety of parameters and time discretizations, we obtain Fig. 6. We note that the variation for finite ξ is probably due to non-universal effects since in the sums over the Brillouin zone of Eqs. (40)- (40), as the chemical potential enters as e.g. sinh(ξ µa). One observes from this figure that all curves come closer in the limit µ → 0, and seem to approach a limit as ξ → ∞. This limiting value corresponds to the point where half-filled Aoki phase terminates, proving that these phase does not extend all the way down to the weak coupling limit, but only survives down to g 2 (µ o = 0) ≈ 0.8 according to the results of Fig. 6. Let us note that extracting this limiting value is numerically hard; for instance, the curvature appears slightly sensitive to temperature, as revealed by calculations on 512 × 1024ξ lattices. However, our approximate prediction g 2 (µ o = 0) ≈ 0.8 is consistent with the cusps of Fig. 4, where the Aoki phase terminates. Fig. 6 therefore strengthens our belief in the existence of a tricritical point at non-zero g 2 (µ o = 0); for couplings below this value there is a direct transition between trivial and topological insulating phases as m is varied, and no parity breaking Aoki phase is encountered in the middle.
So far, we have used the large-N results for a non-zero chemical potential to extract features of the half-filled phase diagram by taking the limit µ → 0 in a controlled manner. However, we note that another interesting question would be to study the fate of the symmetry-protected topological phases, and the appearance of other new phases of matter, in the GNW model away from half filling. In that respect, we note that our large-N results point towards the appearance of a new phase (i.e. droplet-shape region of Fig. 5). Since we have argued that the finite charge densities appear due to the formation of a Fermi surface, it is reasonable to expect that such densities will not drop abruptly to zero as we move away from the critical line. In that sense, the droplet-shaped region may either correspond to a metallic phase where the Fermi points occur at different momenta as the microscopic parameters are modified, or maybe to a kind of charge-density-wave where the fermionic density forms a regular periodic pattern. Un-derstanding the nature of this phase lies outside of the scope of the present work, and will be the subject of a future work. We advance at this point that the density-matrix renormalization group methods discussed in the following section could be adapted to study situations away from half filling, and are a potential tool to address the nature of this new phase. Moreover, we also note that the sign problem for µ = 0 can be safely avoided for any discretized Gross-Neveu or Nambu-Jonal-Lasinio models, such that Monte Carlo techniques [84] could also be applied to the present problem, and extensions thereof.
B. Large-N benchmark via matrix product states
In this section, we test the above large-N prediction for the single-favor GNW lattice model using numerical routines based on matrix product states (MPS) [14] (i.e. a variational version of real-space density-matrix renormalization group method [85]). On the one hand, this can be considered as the most stringent test of the validity of the large-N approach, as we are indeed very far from the large-N limit. On the other hand, the choice of N = 1 is also motivated by the fact that the single-flavor GNW model can be realized in cold-atom experiments following the scheme of Sec. II C described below. Note that the N = 1 flavor of the continuum Gross-Neveu QFT (2) with an additional mass term corresponds to the socalled massive Thirring model [86]. The discretization of this QFT using the Wilson approach allows us to discuss the occurrence of symmetry-protected topological phases in this LFT, and use it to benchmark the large-N predictions for the phase diagram of the GNW model with a finite number of flavors.
High-energy physics to condensed matter mapping
We consider the GNW lattice Hamiltonian (3)-(5) for a single fermion flavor N = 1. By performing a U(1) gauge transformation to the spinors Ψ(x) → e −i π 2a x Ψ(x), which can be understood as an instance of a Kawamoto-Smit phase rotation in LFTs [87], and using the algebraic properties of the gamma matrices, we can rewriteH W →H W = a∑ x∈Λ :H W :, wherẽ In this notation, the Hamiltonian looks similar to the Hubbard model [88], a paradigm of strongly-correlated electrons in condensed matter [89], with an additional spin-orbit coupling. Note that this formulation only differs from Eqs. (3)-(5) on the particular distribution of the complex tunnellings, which can be understood as a gauge transformation on a background magnetic field maintaining an overall π-flux. Indeed, the defining property of the above Kawamoto-Smit phases is that they yield a π-flux through an elementary plaquette. In order to understand the origin of this magnetic flux, let us introduce the following notation for the Dirac spinor Here, the dimensionless fermion operators c i, depend on a spinor index ∈ {u, d} that can be interpreted in terms of the upper ( = u) and lower ( ∈ d) legs of a synthetic ladder, and i ∈ {1, · · · , N s } labels the positions of the rungs of the ladder (see Fig. 7(a)). Considering our particular choice of gamma matrices γ 5 = σ x , γ 0 = σ z , the corresponding HamiltonianH W for the chosen Wilson parameter r = 1 can be rewritten as where we have introduced s = +1 (s = −1) for the upper = u (lower = d) leg of the ladder, and¯ = d, u for = u, d. As can be seen in Fig. 7(a), there is a net π-flux due to an Aharonov-Bohm phase that the fermion would pick when tunneling around an elementary plaquette.
In particular, Eq. (57) can be understood as a generalized Hubbard model on a ladder corresponding to the imbalanced Creutz-Hubbard model [71], which is an interacting version of the so-called Creutz ladder [90,91]. The first line in Eq. (57) describes the horizontal and diagonal tunneling of fermions with strengtht = 1/2a, which are subjected to an external magnetic π-flux threading the ladder (see Fig. 7(a)). One thus finds that the UV cutoff of the GNW model Λ = 1/a is provided by the maximum energy within the band structure Λ = 2t of the Creutz-Hubbard model. Likewise, one understands that the first term in the second line of Eq. (57) corresponds to an energy imbalance between both legs of the ladder ∆ε/2 = (m + 1/a), and yields a single-particle Hamiltonian in momentum space that is similar to Eq. (6), namely Finally, the last term of Eq. (57) amounts to a Hubbard-type density-density interaction V v n n,u n n,d between fermions residing on the same rung of the ladder, which repel themselves with a strength V v = g 2 /a. According to this discussion, the high-energy-physics GNW lattice model is gauge equivalent to the condensedmatter imbalanced Creutz-Hubbard model. Similarly to the high-energy physics convention of working with dimensionless parameters m/Λ = ma and g 2 , the condensed-matter community normalizes the couplings to the tunneling strengtht, such that the exact relation between the microscopic parameters of these two models is Let us also note that, in the condensed-matter context, the lattice constant d of the model (57) is fixed by the underlying Bravais lattice of the solid, which is typically set to d = 1 in the calculations (58) (i.e. lattice units). Note, however, that this does not preclude us from taking the continuum limit. In this case, the continuum limit corresponds to the low-energy limit, wheret = 1/2a (i.e. UV cutoff) is much larger than the energy scales of interest. By setting the model parameters in the vicinity of a second-order quantum phase transition, the relevant length scales fulfill ξ l d, and one recovers universal features that are independent of the microscopic lattice details, and can be described by a continuum QFT. In this section, we exploit the above mapping (59) to explore the phase diagram of the N = 1 lattice GNW model by importing some of the condensed-matter and quantuminformation techniques described in [71]. In particular, we will use the numerical matrix-product-state results to benchmark the large-N predictions. We remark that this mapping also becomes very useful in the reverse direction, as certain aspects of the Creutz-Hubbard model become clarified from the high-energy perspective of the GNW model.
In the parameter regime ∆ε/4t < 1, which corresponds to a bare mass −ma ∈ [0, 1], we found that the imbalanced Creutz-Hubbard model displays three distinct phases: an orbital paramagnet, an orbital ferromagnet, and an SPT phase [71]. The orbital paramagnet corresponds to a gapped phase of matter that is characterized by the absence of long-range order and any topological feature. Therefore, this phase should correspond to the trivial band insulator of the GNW model in Fig. 1.
The orbital ferromagnet, on the other hand, is a phase displaying an Ising-type long-range order due to the spontaneous breaking of a discrete orbital symmetry. Accordingly, it should correspond to the parity-broken Aoki phase of the Gross Neveu model in Fig. 1. To show this correspondence in more detail, let us comment on the orbital magnetization introduced for the Creutz-Hubbard ladder T 0 = T y i = 1 2 ic † i,u c i,d + c.c. = 0 ∀i, and show that it is related to an order parameter of the GNW model. The parity symmetry of the GNW model that is broken in the Aoki phase, namely Ψ(x) → ηI N ⊗ γ 0 Ψ(−x) with |η| 2 = 1, corresponds to c i,u → ηc N s −i,u , and c i,d → −ηc N s −i,d in the Creutz-Hubbard ladder. Hence, one finds that T 0 = T y i → − T y N s −i = −T 0 is spontaneously broken by the orbital ferromagnet. We thus see that, in the language of the synthetic Creutz-Hubbard ladder, the pseudoscalar condensate corresponds to an Ising-type ferromagnet with a non-zero orbital magnetization T 0 = −Π 0 a/2 = − Ψiγ 5 Ψ a/2. This connection also teaches us that one can perform a rigorous finite-size scaling of the pseudoscalar condensate to obtain accurate predictions of the critical lines enclosing the whole Aoki phase, instead of using the various mappings discussed in [71].
Finally, as shown explicitly in [71], the Creutz-Hubbard ladder also hosts a correlated SPT phase, which displays a double-degenerate entanglement spectrum [92] due to a couple of zero-energy edge modes. This phase should thus corresponds to the BDI symmetry-protected topological phase of the GNW model discussed throughout this work (see Fig. 1). Let us remark, however, that the topological insulator of the Creutz-Hubbard model lies in the symmetry class AIII, breaking explicitly the time-reversal T and charge-conjugation C symmetries, yet maintaining the sublattice symmetry. According to our discussion below Eq. (12), we see that the Creutz-Hubbard single-particle Hamiltonian breaks T: γ 0 (h CH −k ) * γ 0 = h CH k , and C: γ 5 (h CH −k ) * γ 5 = −h CH k , explicitly. On the other hand, the combination S = TC yields (γ 1 ) † h CH k γ 1 = −h CH k , such that the topological insulator of the Creutz-Hubbard ladder is in the AIII symmetry class. Therefore, the last element of our high-energy physics to condensed-matter dictionary is the mapping between the symmetry classes BDI ↔ AIII, which is a direct consequence of the above local gauge transformation/Kawamoto-Smit phase rotation. Although differences will arise regarding perturbations that explicitly break/preserve the corresponding symmetries (e.g. disorder), the phase diagram of the translationallyinvariant GNW model should coincide exactly with that of the Creutz-Hubbard ladder provided that one uses the relation between microscopic parameters in Eq. (59).
With this interesting dictionary for the correspondence of phases, and the microscopic parameter mapping in Eq. (59), we can use numerical matrix-product-state simulations, extending the parameter regime studied in [71] from ∆ε/4t < 1 to −1 < ∆ε/4t < 1. In this way, we can explore the full phase diagram diagram of the N = 1 GNW model, and compare it to our previous large-N predictions for −ma ∈ [0, 2]. Let us recall that the large-N approach fulfills (28), such that the obtained phase diagrams have a mirror symmetry about −ma = 1. However, it is not clear a priori if this symmetry is a property of the model, or if it is instead rooted in the approximations of the large-N prediction. We will be able to address this question with our new matrix-product-state simulations.
In Figs. 8 (a)-(b), we discuss a representative example of the finite-size scaling of the pseudoscalar condensate Π 0 = Ψiγ 5 Ψ for the transition between the trivial, or topological, band insulators and the Aoki phase. One clearly sees that the matrix-product-state numerical simulations for different lengths display a crossing that gives access to the critical point (main panel of Figs. 8 (a)-(b)), and that the data collapse of (inset of Figs. 8 (a)-(b)) corroborates that this critical point lies in the Ising universality class. Note that the pseudoscalar condensate gives no information about the phase transition between the trivial and topological insulators. In order to access this information, the mapping to the Creutz-Hubbard ladder becomes very useful, as it points to the possibility of using a generalized susceptibility associated to the variation of the scalar condensate σ 0 = ΨΨ with the bare mass χ σ 0 = ∂ σ 0 /∂ m. As shown in the main panel of Fig. 8 (c), this susceptibility diverges at the critical point of the thermodynamic limit, and can be used to perform a finite-size scaling.
Repeating this procedure for various critical points, we obtain the red empty circles displayed in Fig. 9, which are compared with the large-N results of Fig. 1 represented as solid lines. We can thus conclude that the large-N predictions are qualitatively correct, as they predict the same three possible phases, and the shape of the critical lines is qualitatively similar to the matrix-product-state prediction. Moreover, the agreement between the critical lines becomes quantitatively correct in the weak-coupling limit g 2 , |ma| 1, which is the regime where the asymptotically-free Gross-Neveu QFT (1) is expected to emerge from the lattice model. Since both the mass and the interaction strengths are relevant perturbations growing with the renormalization-group transformations, one expects that a continuum limit with physical parameters well below the UV cutoff can be recovered provided that g 2 , |ma| 1. Let us also remark that the matrix-product-state simulations are consistent with the mirror symmetry about −ma = 1 of the large-N gap equations. Therefore, it seems that this symmetry is an intrinsic property of the GNW model, which is easy to understand in the non-interacting limit, but not so obvious in the interacting case. On general grounds, Fig. 9 shows that the large-N prediction tends to overestimate the extent of the Aoki phase, predicting that the spontaneous breaking of the parity symmetry occurs for weaker interactions and smaller masses. This trend could be improved by considering next-to-leading-order (NLO) corrections to the saddle-point solution, and will be the subject of a future study. In this sense, our results suggest that large-N methods from a high-energy context can be a useful and systematic tool to study problems of correlated symmetry-protected topological phases in condensed matter.
We now comment on further interesting features that can be learned from this dictionary, and imported from condensed matter into the high-energy physics context. In Fig. 9, we have highlighted with a semi-transparent orange star the The red circles represent the critical points of the N = 1 Gross Neveu lattice model obtained by with matrix product states. The semitransparent green lines joining these points delimit the trivial band insulator, Aoki phase, and the BDI symmetry-protected topological phase. Note again that this SPT phase corresponds to the AIII topological insulator of the Creutz-Hubbard model .These lines are labelled by N = 1, and by the central charge c ∈ {1/2, 1} of the conformal field theory that controls the continuum QFT at criticality. These results serve to benchmark the large-N predictions for the critical lines, which are represented by dark solid lines, and labelled with "N = ∞". We also include the exact critical point at (−ma, g 2 ) = (1, 4), which is depicted by an orange star, and the strong-coupling critical lines that become exact in the limit of g 2 → ∞, which are depicted by dashed orange lines. The matrixproduct-states predictions match remarkably well these exact results.
critical point separating the topological and Aoki phases at (−ma, g 2 ) = (1, 4). This point corresponds to a Creutz-Hubbard model with vanishing imbalance ∆ε = 0, and strong repulsion V v = 8t. Interestingly, it is precisely at this point that an exact quantum phase transition is found by mapping the Creutz-Hubbard ladder onto an exactly-solvable quantum impurity Ising-type model via the so-called maximally-localized Wannier functions [71]. In this way, one learns that the lattice GNW model can be solved exactly for a particular limit with relatively strong couplings, and that the corresponding quantum phase transition must lie in the Ising class. From a high-energy perspective, the whole critical line separating the topological and Aoki phases should be controlled by the continuum QFT of a Majorana fermion, and not by the standard Dirac-fermion QFT expected at weak couplings (i.e. along the critical line separating the topological and trivial insulators). We have proved this rigorously using the numerical scaling of the entanglement entropy [93], which shows that this criti- with the massless Dirac fermion (see Fig. 10).
Let us now discuss the orange dashed line of Fig. 9, which describes an exact solution that becomes valid in the strong-coupling limit g 2 1. From the parameter correspondence (59), this regime corresponds to the strongly-interacting Hubbard model, where one expects to find super-exchange interactions between the fermions [94]. In this case, these superexchange can be described in terms of an orbital Ising model with ferromagnetic coupling J = −2/g 2 a, and subjected to a transverse magnetic field B = 2(m + 1/a). According to the exact solution of the transverse Ising model [95], the strongcoupling critical line J = 2B corresponds to g 2 = 1/2(ma+1). This line, and its mirror image, have been depicted by the orange dashed lines of Fig. 9, and shows a very good agreement with the numerical critical points of the GNW model at strong-couplings g 2 1. Since the strong-coupling mapping yields a transverse Ising model, we learn again that the corresponding continuum QFT at criticality is that of a Majorana fermion, which is corroborated again by the matrix-productstate scaling of the entanglement entropy yielding a central charge of central charge c ≈ 1/2 (see Fig. 10). Therefore, the condensed-matter mapping teaches us that the GNW lattice model has an exact solution in the strong-coupling limit, and both critical lines delimiting the Aoki phase lead to a continuum limit controlled by a Majorana-fermion QFT. These results show that condensed-matter methods can offer a useful and systematic tool to benchmark large-N methods applied to problems of asymptotically-free LFTs in a high-energy context. In future works, we will study leading order 1/N correc-tions to the present large-N approach, and see how fast they approach the exact and quasi-exact results for the phase diagram discussed in this section.
First of all, the bare tunneling must be inhibited by the gradient t ∆. Then, the inter-leg tunnellings of Fig. 7(b) (crossed black lines) can be laser-assisted by a Raman pair [98], which also leads to the energy imbalance terms (yellow loops). We set (i) the Raman frequencies to ∆ω 1 = (ε ↑ − ε ↓ ) + ∆ + ∆ε/2, and ∆ω 2 = (ε ↓ − ε ↑ ) + ∆ − ∆ε/2, where ∆ε is small detuning, (ii) the two-photon Rabi frequencies (phases) to Ω 1 = Ω 2 =: Ω (ϕ 1 = ϕ 2 =: ϕ), and (iii) the corresponding Raman wave-vectors to ∆k k k 1 · e x = ∆k k k 2 · e x = 0. In a rotating frame, the Raman-assisted tunneling arising from the corresponding v i, j σ ,σ (t) term contributes with which contains precisely the desired crossed tunnellings for ϕ = π/2, and the energy imbalance of Fig. 7(b). In order to engineer the horizontal tunneling of Fig. 7(b) (green lines), we shall make use of a third Raman pair, but this time far detuned from the atomic transition ∆ω 3 (ε ↑ − ε ↓ ). In this situation, when the corresponding laser intensities are weak, the Raman term leads to a crossed-beam ac-Stark shift that can be interpreted as slowly-moving shallow optical lattice that acts as a periodic modulation of the on-site energies where Ω σ is the two-photon ac-Stark shift for each of the hyperfine levels, which can be controlled by tuning the intensity and polarization of the lasers. We set (i) the Raman frequency in resonance with the gradient ∆ω 3 = ∆ (ε ↑ − ε ↓ ); (ii) the Raman wave-vector to ∆k k k 3 · e x = k L,x with respect to the static optical lattice; and (iii) the Raman phase ϕ 3 = 0. In a rotating frame, the atoms can absorb energy from this shallow moving lattice, such that the horizontal tunneling gets reactivated [99,100], according to where we have introduced the n-th order Bessel function of the first class J n (x). According to this expression, we can laser-assist the horizontal hopping with the desired signs of Fig. 7(b) by exploiting the state-dependence of the dressed tunneling rates, and setting This can be achieved, while simultaneously maximizing the dressed tunneling, by setting Ω ↑ = 3∆ ≈ 0.6Ω ↓ . Let us note that the cross-tunneling (62) will also get a multiplicative renormalization due to this periodic modulation (63), which will be proportional to J 0 ((Ω ↑ + Ω ↓ )/∆). This dressing is similar to the effect exploited for the so-called coherent destruction of tunneling [101]. To achieve the relation of the tunnellings of Fig. 7(b), one should modify the Rabi frequency of the Raman beams Ω, such that although we note that there might be other strategies to fulfill both constraints (65)-(66) simultaneously. Altogether, considering also Hubbard interactions, the correspondence between the cold-atom and the Gross-Neveu parameters is As announced at the beginning of this section, this scheme provides a slight simplification over the proposal for the Creutz-Hubbard model [71], which required the use of an intensity-modulated superlattice, instead of the shallow moving lattice (63) already implemented in experiments [100]. At this point, we comment on an interesting alternative that would simplify considerably the cold-atom scheme. As realized recently [73], a different choice of the gamma matrices γ 0 = σ x , γ 5 = σ y , simplifies considerably the tunneling of Eq. (60), since iγ 5 + rγ 0 = 2σ + for a Wilson parameter r = 1, where we have introduced the raising operator σ + = |↑ ↓|. Accordingly, the kinetic energy of the Wilson fermions can be depicted by the scheme of Fig. 7(c). 
Let us note that the BDI symmetry class can be readily understood by realizing that the synthetic ladder of this figure can be deformed into a single chain with dimerized tunnellings, and thus corresponds to the Su-Schrieffer-Hegger BDI topological insulator [102].
This representation was exploited in [73] to propose a coldatom realization of quantum electrodynamics with Wilson fermions in (1 + 1) dimensions (i.e. Schwinger model). In that case, one should introduce a bosonic species to simulate the gauge field, and exploit the spin-changing boson-fermion atomic scattering to obtain the gauge-invariant tunneling of the lattice gauge theory. In our case, the required experimental tools are already contained in our previous description and, more importantly, can be considerably simplified with respect to the above discussion. The vertical tunnellings of Fig. 7(c) can be obtained from a Raman pair with ∆ω 1 = (ε ↑ − ε ↓ ), whereas the diagonal tunnellings require another pair of Raman beams with ∆ω 2 = (ε ↓ − ε ↑ ) + ∆, but no additional periodic modulations would be required. Therefore, if no additional disorder is to be considered, which could depend on the particular symmetry class and choice of gamma matrices, this later approach should be followed for the cold-atom experiment, as it simplifies the experimental requirements for the quantum simulation of the GNW model.
Let us finally comment on another interesting alternative. The non-interacting Creutz ladder has been recently realized in multi-orbital optical-lattice experiments [103] that exploit two orbital states of the optical lattice to encode the legs of the ladder, and orbital-changing Raman transitions to implement the inter-leg tunnelings. It would be interesting to study the type of multi-orbital interactions [104] that can be generated in this setup, and the possibility of simulating directly the GNW model studied in this work.
IV. CONCLUSIONS AND OUTLOOK
In this work, we have described the existence of correlated symmetry-protected topological phases in a discretized version of the Gross-Neveu model. We have applied large-N techniques borrowed from high-energy physics, complemented with the study of topological invariants from condensed matter, to unveil a rich phase diagram that contains a wide region hosting a BDI topological insulator. This region extends to considerably strong interactions, and must thus correspond to a strongly-correlated symmetry-protected topological phase. We have shown that this phase, and the underlying topological invariant, can be understood in terms of the renormalization of Wilson masses due to interactions (i.e. dynamic mass generation due to a scalar fermionic condensate). This renormalization has been used to find a critical line at weak couplings that separates the topological insulator from a gapped phase that can be adiabatically deformed into a trivial product state (i.e. trivial band insulator). Moreover, we have shown that for sufficiently-strong interactions, a gapped phase where parity symmetry is spontaneously broken (i.e. Aoki phase) is formed due to the appearance of a pseudoscalar fermion condensate. The large-N prediction has allowed us to find the critical line separating the topological insulator from the Aoki phase by studying the onset of the pseudoscalar condensate, and show that it terminates at a tricritical point where all these three phases of matter coexist.
By using both Hamiltonian and Euclidean lattice approaches, we have been able to pinpoint important details that must be carefully considered when taking the time-continuum limit of the lattice approaches, such that standard methods of lattice field theories can be used to describe quantitatively the phase diagram of the Gross-Neveu-Wilson Hamiltonian. In particular, we have described how lattice artifacts can appear in the standard dimensionless formulation of the Euclidean field theory, and how the spurious time doublers, even when residing at the cutoff of the theory, can renormalize the bare parameters and introduce qualitative modifications to the layout of the phases. The results hereby presented will serve as the starting point for the application of other well-established Euclidean lattice techniques to explore the phenomenology of leading-order corrections that appear for finite N.
Motivated by the possibility of implementing a cold-atom quantum simulator of the Gross-Neveu-Wilson model for a single flavor N = 1, which has also been described in this work, we have benchmarked these large-N predictions by means of quasi-exact numerical methods based on matrix product states. In particular, we have shown that the single-flavor model, corresponding to a discretized version of the massive Thirring model, can also be mapped onto a condensed-matter Hamiltonian of spinless fermions hopping on a two-leg ladder and interacting via Hubbard-type couplings. This connection has allowed us to identify the phases of the Gross-Neveu-Wilson model, discussed above, with condensed-matter counterparts that include orbital paramagnets and ferromagnets, as well as a chiral-unitary topological phase. In this way, the matrix-product-state simulations can readily access a variety of observables to determine the position of the critical lines, which show a remarkable qualitative agreement with the large-N predictions that becomes even quantitative in the region where the continuum Gross-Neveu QFT is expected to emerge (i.e. weak couplings). These numerical simulations also prove that the symmetry of the large-N phase diagram holds for N = 1, and should then be maintained at all orders O(1/N^α). Beyond the matrix-product-state simulations, the aforementioned mapping has allowed us to import exact results for the Gross-Neveu-Wilson model in the regime of intermediate and strong couplings, which originate from quantum-impurity and quantum-magnetism techniques in condensed matter.
Therefore, we believe that our work constitutes an example of the useful dialogue and exchange of ideas between the high-energy physics, condensed-matter, quantum-information, and quantum-optics communities, stimulating further cross-disciplinary efforts in the future. As an outlook, one can easily foresee that lattice field-theory techniques to study leading-order corrections to the large-N behavior will be very useful to elucidate the mechanism that induces strong correlations in the symmetry-protected topological phase of the Gross-Neveu-Wilson model. Likewise, quantum-information approaches might be useful to understand the entanglement content of those phases, making a connection to the lattice field-theory techniques. As already pointed out by the Euclidean lattice results, new phases of the Gross-Neveu-Wilson model can arise as one moves away from half-filling. It will be very interesting to explore the nature of these phases using some of the high-energy and condensed-matter techniques hereby discussed. We also note that the techniques hereby presented can be generalized to other lattice Hubbard-type models, not necessarily connected to well-known relativistic QFTs. In particular, it will be very interesting to apply them to the study of higher-dimensional models hosting topological phases of matter. In this context, Aoki phases have been identified in the limit of very strong Coulomb interactions via strong-coupling techniques of lattice gauge theories [37,77]. These results have been used to conjecture the qualitative shape of the phase diagram in the regime of weak to intermediate interactions [77], which is expected to be more relevant for the understanding of correlation effects in topological insulating materials.
"Physics"
] |
Procalcitonin-guided antibiotic therapy may shorten length of treatment and may improve survival—a systematic review and meta-analysis
Background Appropriate antibiotic (AB) therapy remains a challenge in the intensive care unit (ICU). Procalcitonin (PCT)-guided AB stewardship could help optimize AB treatment and decrease AB-related adverse effects, but firm evidence is still lacking. Our aim was to compare the effects of PCT-guided AB therapy with standard of care (SOC) in critically ill patients. Methods We searched the databases CENTRAL, Embase and Medline. We included randomized controlled trials (RCTs) comparing PCT-guided AB therapy (PCT group) with SOC reporting on length of AB therapy, mortality, recurrent and secondary infection, ICU length of stay (LOS), hospital LOS or healthcare costs. Due to recent changes in sepsis definitions, subgroup analyses were performed in studies applying the Sepsis-3 definition. In the statistical analysis, a random-effects model was used to pool effect sizes. Results We included 26 RCTs (n = 9048 patients) in the quantitative analysis. In comparison with SOC, length of AB therapy was significantly shorter in the PCT group (MD −1.79 days, 95% CI: −2.65, −0.92) and was associated with a significantly lower 28-day mortality (OR 0.84, 95% CI: 0.74, 0.95). In Sepsis-3 patients, the mortality benefit was more pronounced (OR 0.46, 95% CI: 0.27, 0.79). Odds of recurrent infection were significantly higher in the PCT group (OR 1.36, 95% CI: 1.10, 1.68), but there was no significant difference in the odds of secondary infection (OR 0.81, 95% CI: 0.54, 1.21), or in ICU and hospital length of stay (MD −0.67 days, 95% CI: −1.76, 0.41 and MD −1.23 days, 95% CI: −3.13, 0.67, respectively). Conclusions PCT-guided AB therapy may be associated with reduced AB use, lower 28-day mortality but higher infection recurrence, with similar ICU and hospital length of stay. Our results underscore the need for better-designed studies investigating the role of PCT-guided AB stewardship in critically ill patients. Supplementary Information The online version contains supplementary material available at 10.1186/s13054-023-04677-2.
Introduction
Inappropriate use of antibiotics (ABs) has serious adverse effects. As a result, antibiotic resistance is emerging, causing approximately 700,000 deaths worldwide in 2014, and it is predicted to be the leading cause of death worldwide by 2050, accounting for 10 million deaths per year [1]. Critically ill patients in the ICU are at high risk of becoming infected with multidrug-resistant organisms (MDRO) due to their acquired immune deficiency, resulting in unacceptably high morbidity and mortality [2].
In general, more than 50% of critically ill patients are considered infected. Infection and related sepsis can more than double ICU mortality [3]. However, less than 60% of critically ill patients with an initial diagnosis of sepsis are confirmed to be infected [4]. Despite the known challenges in the differential diagnosis of infection and sepsis, there is strong pressure to administer ABs shortly after the onset of sepsis and septic shock [5]. This strategy may inevitably result in unnecessary AB therapy, thus increasing the chance of harm and the costs associated with AB treatment.
Procalcitonin (PCT) is one of the most studied inflammatory biomarkers [6] and can distinguish bacterial from viral infections in critically ill patients [7,8]. There is growing evidence that PCT-guided AB therapy can safely reduce antimicrobial consumption by reducing the number of unnecessary or excessively long therapies. The results of a large, individual-patient-data meta-analysis in 2017 support the use of PCT in the management of AB stewardship in acute respiratory infections in a variety of clinical settings [9]. However, the evidence is less convincing in other types of infection and in sepsis.
The PRORATA study was the first large, multicenter RCT to demonstrate the efficacy and non-inferiority of AB management guided by a predefined PCT protocol in septic critically ill patients [10]. Subsequent trials conducted in ICUs used approaches identical or similar to PRORATA, but the PCT starting and stopping thresholds varied, and the patient populations were also heterogeneous, with medical, surgical, or mixed populations treated for different types of infections; therefore, the overall interpretation and implementation of PCT-guided AB therapy in the ICU setting remains challenging. Moreover, with the implementation of the new Sepsis-3 definition [11], study inclusion criteria for sepsis and septic shock have also changed in the most recent clinical trials as compared to the definitions used for decades previously [12,13]. An updated comprehensive analysis of PCT stewardship in the ICU setting, including Sepsis-3 patients, was lacking.
Therefore, we aimed to perform a systematic review and meta-analysis of randomized controlled trials (RCTs) that investigated the effects of PCT-guided AB therapy compared to standard of care (SOC) in critically ill patients.
Methods
We report our systematic review and meta-analysis based on the recommendations of the PRISMA 2020 guideline [14] (see Additional file 1: Table S1), while we followed the Cochrane Handbook [15]. The protocol of the study was registered on PROSPERO (registration number CRD42022374605), and we adhered to it except for one additional outcome measure (rate of secondary infection) and two subgroup analyses (PCT protocol and patient population).
Eligibility criteria
Applying the PICO (Population, Intervention, Comparator, Outcome) framework, we included RCTs that were conducted in P: adult patients with known or suspected infection treated with antibiotics; I: PCT-guided AB therapy; C: SOC (without PCT use); and that provided data on any of the following, O: length of AB therapy, mortality, rate of recurrent infection (clinically confirmed infection in the same location caused by the same pathogen as the primary one), rate of secondary infection (clinically confirmed infection caused by an organism different from the primary one), length of ICU stay, length of hospital stay and healthcare costs. RCTs conducted in the ICU were included in the quantitative analysis; those conducted in other clinical settings were included in the qualitative analysis.
Information sources and search strategy
Our systematic search was conducted in three main databases (CENTRAL, Embase and Medline) on November 14, 2022. We used the following search key in all databases: (sepsis OR septic OR infection) AND (PCT OR procalcitonin) AND (antibiotic* OR antimicrobial OR anti-microbial). Conference papers were excluded.
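To make the search reproducible, the boolean key above can also be run programmatically. The following is a minimal sketch using Biopython's Entrez interface against PubMed (Medline); this tooling and the contact e-mail are illustrative assumptions, not the interfaces the authors actually used for CENTRAL, Embase and Medline.

```python
# Minimal sketch of running the stated search key against Medline via PubMed,
# using Biopython's Entrez module. Illustrative only: the study searched
# CENTRAL, Embase and Medline through their own interfaces.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address (placeholder)

query = ("(sepsis OR septic OR infection) AND (PCT OR procalcitonin) "
         "AND (antibiotic* OR antimicrobial OR anti-microbial)")

# esearch returns matching PubMed IDs; retmax limits how many are fetched.
handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print(f"Total hits: {record['Count']}")
print("First PMIDs:", record["IdList"])
```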
Selection process and data extraction
Selection was performed by two independent review authors (M.P. and N.K.) using a reference management software (EndNote 20, Clarivate Analytics). After automatic and manual duplicate removal, the reviewers screened titles and abstracts, then full texts, against the predefined eligibility criteria. Data were collected independently by two authors (M.P. and M.B.) on a standardized data extraction sheet. We used Google Translate for an article in Chinese [16]. The following data were extracted in addition to the previously mentioned outcomes: digital object identifier, first author, publication year, countries, centers, study period, study population, sepsis definition, age, gender, PCT protocol, protocol adherence, appropriateness of AB therapy, and exclusion criteria.
Subgroup analysis
We planned to perform subgroup analyses to reduce heterogeneity according to the applied sepsis definitions (Sepsis-1 [13], -2 [12] and -3 [11]), the PCT protocol (liberal: stop AB if PCT decreased by > 80% from the peak value or fell below 0.5 ng/mL; conservative: stop AB if PCT decreased by > 90% from the peak value, fell below 0.1-0.25 ng/mL, or stayed below 1 ng/mL for 3 days) and the patient population (medical, surgical and mixed). We considered ventilator-associated pneumonia (VAP) as pulmonary sepsis.
Risk of bias assessment and evidence level
Three authors (M.P., M.B. and D.T.) performed the risk of bias assessment independently using the revised Cochrane risk-of-bias tool for randomized trials (RoB 2) [17] and GRADE Pro [18] to assess the quality of evidence, with disagreements resolved by another author (C.T.).
Synthesis methods
At least three studies had to be included to perform a meta-analysis. As we assumed considerable between-study heterogeneity in all cases, a random-effects model was used to pool effect sizes.
For dichotomous outcomes, the odds ratio (OR) with 95% confidence interval (CI) was used to measure the effect size. The pooled OR based on raw data was calculated using the Mantel-Haenszel method [19,20]. For continuous outcomes, the difference between means (MD) was used to measure the effect size. To calculate the pooled difference, the sample size, the mean and the corresponding standard deviation (SD) were extracted from each study. If the SD was not provided but the standard error (SE) or confidence interval was available, we calculated the SD from it. The inverse variance weighting method was used to calculate the pooled MD.
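For illustration, the two pooling rules just described can be written out directly. The sketch below computes a Mantel-Haenszel pooled OR and an inverse-variance pooled MD in their simple fixed-effect forms; the per-study numbers are invented placeholders, and the authors' actual random-effects model additionally incorporates a between-study variance (see the next snippet).

```python
# Minimal sketch of the two pooling computations described above:
# Mantel-Haenszel pooled odds ratio (dichotomous) and inverse-variance
# pooled mean difference (continuous). Per-study numbers are illustrative
# placeholders, not data from the included trials.
import numpy as np

# Dichotomous outcome: columns are (events_pct, n_pct, events_soc, n_soc)
studies_or = np.array([
    [30, 200, 40, 205],
    [12, 150, 18, 148],
    [55, 400, 70, 395],
], dtype=float)

a, n1, c, n2 = studies_or.T          # events and arm sizes, PCT vs SOC
b, d = n1 - a, n2 - c                # non-events per arm
n = n1 + n2
or_mh = np.sum(a * d / n) / np.sum(b * c / n)   # Mantel-Haenszel pooled OR
print(f"MH pooled OR: {or_mh:.2f}")

# Continuous outcome: (mean difference, standard error) per study
# If only a 95% CI of the mean is reported: sd = sqrt(n) * (hi - lo) / (2 * 1.96)
md = np.array([-1.5, -2.1, -0.8])
se = np.array([0.6, 0.9, 0.5])
w = 1.0 / se**2                      # inverse-variance weights
md_pooled = np.sum(w * md) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
ci = (md_pooled - 1.96 * se_pooled, md_pooled + 1.96 * se_pooled)
print(f"IV pooled MD: {md_pooled:.2f} days, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```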
We used a Hartung-Knapp adjustment if it resulted in a more conservative estimate than without adjustment [21,22]. Results were considered statistically significant if the CI did not include the null value (zero for mean differences; one for odds ratios). We summarized the findings of the meta-analysis in forest plots. Where appropriate, we reported the prediction intervals (i.e., the expected range of effects of future studies) of the results. Heterogeneity was assessed using Higgins and Thompson I² statistics [23].
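A minimal sketch of these heterogeneity quantities, reusing the placeholder data above: Cochran's Q, the Higgins-Thompson I², a DerSimonian-Laird estimate of the between-study variance τ² (one common choice; the text does not state which τ² estimator was used), and a prediction interval for the random-effects mean.

```python
# Minimal sketch of the heterogeneity statistics referred to above.
# Inputs are the same illustrative (md, se) placeholders as before.
import numpy as np
from scipy import stats

md = np.array([-1.5, -2.1, -0.8])
se = np.array([0.6, 0.9, 0.5])
w = 1.0 / se**2
k = len(md)

mu_fe = np.sum(w * md) / np.sum(w)                 # fixed-effect mean
q = np.sum(w * (md - mu_fe) ** 2)                  # Cochran's Q
i2 = max(0.0, (q - (k - 1)) / q) * 100             # I^2 in percent
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)                 # DL between-study variance

# Random-effects pooled mean and its standard error
w_re = 1.0 / (se**2 + tau2)
mu_re = np.sum(w_re * md) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

# Prediction interval (t quantile with k-2 degrees of freedom)
t = stats.t.ppf(0.975, df=k - 2)
half = t * np.sqrt(tau2 + se_re**2)
print(f"I^2 = {i2:.0f}%, tau^2 = {tau2:.3f}")
print(f"RE mean {mu_re:.2f}, 95% PI ({mu_re - half:.2f}, {mu_re + half:.2f})")
```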
When necessary and possible, model-fitting parameters and potential outlier publications were explored using different influence measures and plots (e.g., leave-one-out analysis for changes in fitted values, Baujat diagnostic values and plots), as recommended by Harrer et al. (2021) [27]. Small-study publication bias was assessed by visual inspection of funnel plots and Egger's test (a modified Egger's test, depending on the type of effect size measure) with a 10% significance level [28].
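Egger's test itself is a regression of the standardized effect on precision, with asymmetry indicated by a non-zero intercept. A minimal sketch with invented placeholder effects follows; the 10% significance level matches the text, while the simple unmodified form shown here is an assumption.

```python
# Minimal sketch of Egger's regression test for small-study effects:
# regress effect/SE on 1/SE and test whether the intercept differs from
# zero. Effects and SEs are illustrative placeholders.
import numpy as np
import statsmodels.api as sm

effect = np.array([-1.5, -2.1, -0.8, -1.2, -0.4, -2.4])
se = np.array([0.6, 0.9, 0.5, 0.7, 0.4, 1.1])

z = effect / se            # standardized effects
precision = 1.0 / se

X = sm.add_constant(precision)
fit = sm.OLS(z, X).fit()
intercept, p_intercept = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f}, p = {p_intercept:.3f}")
print("Asymmetry suspected" if p_intercept < 0.10 else "No asymmetry detected")
```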
For subgroup analysis, we used a fixed-effects "plural" model (also known as a mixed-effects model). We assumed that subgroups had different τ² values, as we anticipated differences in the between-study heterogeneity of the subgroups, although for practical reasons, if any subgroup contained five or fewer studies, a common τ² assumption was used [29].
Search and selection
Our systematic search resulted in 15,788 articles. After the selection process, 26 articles were included in the meta-analysis [10, 16, …] and 23 articles in the systematic review. The latter included patients who were treated outside the ICU. Figure 1 shows the PRISMA 2020 flow diagram of the search.
Basic characteristics of included studies
Baseline characteristics of the included studies are detailed in Table 1. Other relevant information is summarized in Additional file 1: Table S2.
Recurrent and secondary infection
Infection recurrence was observed in 99 out of 2,070 patients in the PCT group and in 75 out of 2,080 patients in the SOC group, indicating a significant difference (OR 1.36, 95% CI: 1.10, 1.68, p = 0.008) (Fig. 4).
Length of ICU stay, length of hospital stay and healthcare costs
Length of ICU stay and length of hospital stay were non-significantly reduced in the PCT group compared to the SOC group (MD −0.67 days, 95% CI: −1.76, 0.41 and MD −1.23 days, 95% CI: −3.13, 0.67, respectively) (Additional file 1: Figures S5 and S6). Due to the highly heterogeneous reporting of healthcare costs, we could only use a non-comprehensive method in the analysis, with results favoring PCT use (Additional file 1: Figure S7).
Risk of bias and GRADE assessment
Two trials had a high overall risk of bias (ROB) due to missing outcome data [34,35], whereas 23 trials had some concerns in the ROB assessment due to deviations from the intended intervention (PCT protocol violations) or the lack of reporting it [10, 16, 30-33, 36-41, 43-53]. Only one trial had an overall low ROB [42]. For assessing publication bias, funnel plots can be found in the supplementary material (Additional file 1: Figure S8 (a-f)).
Certainty of evidence proved to be high for length of AB therapy and 28-day mortality; moderate for in-hospital mortality, ICU mortality, and the rates of recurrent and secondary infection; low for length of ICU stay and length of hospital stay; and very low for healthcare costs. ROB and GRADE results are shown in the respective forest plots.
Discussion
In our meta-analysis, we analyzed 26 RCTs [10, 16, …] with a total of 9,048 patients, comparing the effects of PCT-guided AB therapy with standard of care on length of AB therapy, mortality, infection recurrence, length of stay and healthcare costs.
Length of AB therapy
Our study confirms the findings of previous meta-analyses [77,78] that PCT-guided AB therapy, including AB cessation rules, can significantly reduce the length of AB therapy in ICU patients. An interesting finding of our study was that the three different sepsis definitions had an impact on the results, with significantly shorter AB therapy in the PCT group in the Sepsis-1 cohort and non-significant results in the Sepsis-2 and -3 cohorts. Although the mean difference was by far the largest in Sepsis-3 patients [30,31,34], the results lacked statistical significance. The relatively low sample size of Sepsis-3 patients compared to the other sepsis cohorts could explain the lack of significance. On the other hand, five out of nine trials in the Sepsis-2 cohort used conservative PCT protocols, and two of them [43,48] demonstrated even longer AB duration in the PCT group, which may also have contributed to the observed smaller effect on AB length in this population. We further classified the trials into two subgroups (liberal and conservative) depending on the stopping rule in the PCT group, except for three trials [39,51,52] that used a very unique protocol and studies using only starting rules that did not report this outcome [41,46,49]. Our analysis strongly suggests that a liberal PCT protocol may result in shorter AB duration compared with a conservative one. Furthermore, protocol adherence was very low (40-50%) in three trials of the group using the liberal protocol [10,35,40], so the difference could have been larger with fewer protocol violations.
Our results show that in mixed populations (where the proportion of surgical patients is at least 25%), the length of AB therapy is slightly longer than in medical patients. Apart from one study with different PCT cut-offs for patients during the 48-h postoperative period [44], the included trials used the same protocol regardless of the population. PCT values can be elevated after surgery even in the absence of infection [79], and the use of absolute PCT stopping thresholds in these cases might result in AB overuse. Data on populations including only surgical patients were insufficient for meta-analysis, but pooling data from two surgical cohorts [51,52] results in an even more pronounced reduction in the length of AB therapy. This may be explained by the high absolute stopping threshold (1 ng/mL) used in the study protocols.
28-day, in-hospital and ICU mortality
Our results suggest that 28-day and in-hospital mortality are lower in the PCT group than in the SOC group. However, results are conflicting, as some trials showed a survival benefit [30,40] and some others did not [10,43,48]. This contradiction may be partially resolved by our results, namely that a mortality benefit is only observed in Sepsis-2 and Sepsis-3 patients, medical patients and trials using a liberal PCT protocol, all of which are associated with shorter AB duration. Unfortunately, our results do not allow us to explain the relationship between AB therapy duration and mortality. Nevertheless, several studies have shown the potential harmful effects of ABs. These include direct toxic effects and organ injury [80], development of AB resistance and potentially higher chances of secondary infections, mostly caused by MDRO [1], mitochondrial dysfunction associated with ABs [81], and injury and collapse of the microbiome [82]. Moreover, a low initial PCT value can help the differential diagnosis, thereby optimizing patient care and reducing mortality.
Recurrent and secondary infections
Theoretically, a too-short course of ABs could risk infection recurrence, while overuse of ABs is a risk for secondary infections. Our data show significantly higher recurrence of infection in the PCT group, which contradicts the latest meta-analysis [77]; however, that analysis included mostly non-ICU patients with respiratory tract infections. We share the view of the open-label SAPS trial group [40] that bias cannot be excluded, as clinicians aware of treatment allocation might have monitored the PCT group more closely for recurrent infection.
Length of ICU stay, length of hospital stay, healthcare costs
The higher infection recurrence rate did not result in longer ICU and in-hospital stays in the PCT group, which is consistent with the previous meta-analysis in septic ICU patients [78]. Despite the high heterogeneity in cost-effectiveness reports, our results suggest that PCT guidance at least does not appear to be inferior to SOC, but further research is needed to draw firm conclusions about this outcome.
In studies on respiratory tract infection, AB use was either reduced in the PCT group or similar between study arms with no difference in adverse outcomes.
In patients with peritonitis, Mahmutaj et al. reported a significant reduction in AB use in the PCT arm without an elevated risk of infection recurrence [59]. Slieker et al., in a similar trial, reported no adverse outcomes associated with a non-significantly reduced AB treatment duration [60]. An approach based on PCT and pyuria in UTI patients [65] reduced AB exposure by 30% without adverse effects, whereas in febrile neutropenia [63], PCT had no effect on AB use.
Strengths and limitations
To the best of our knowledge, this meta-analysis contains the largest number of studies to date, all of which are RCTs. We are also the first to perform subgroup analyses based on sepsis definitions, patient populations, and PCT protocols: our results provide some support that recruiting patients into studies according to the Sepsis-3 definition may have an impact on outcomes; that surgical and medical patients may require separate treatment protocols; and that conservative guidance is not superior to a liberal strategy. Finally, we rigorously followed all Cochrane Collaboration guidelines, thereby ensuring maximum quality, transparency, and reproducibility of the results.
Our meta-analysis has certain limitations. First, in the control arm, SOC was not "standardized", as different AB guidelines were applied in different institutions; this could result in a longer duration of AB therapy in some regions, thus overestimating the effect of PCT guidance. Second, "PCT guidance" does not denote a standard approach, as studies applied different PCT protocols: 16 of the 26 included studies used a PCT protocol to stop ABs, 3 used a PCT protocol to start ABs, while 7 used PCT guidance for both starting and stopping AB therapy. Furthermore, not all studies reported on all outcomes. The source of infection varied between the studies, and the number of patients with septic shock ranged between 7 and 87%, indicating huge variability in the severity of the patient populations; an impact on outcomes therefore cannot be excluded. Similarly, in the 15 studies reporting PCT protocol adherence, adherence ranged between 44 and 97%. Furthermore, AB appropriateness could have an important effect on outcome. However, we do not know whether patients received appropriate or inappropriate ABs in the same or a similar proportion in the PCT-guided and control groups, as this outcome was only reported in 5 studies, in which the groups were well balanced in this regard [10,30,35,45,50]; we therefore cannot draw conclusions on this topic. Finally, almost all studies excluded patients with a history of immunocompromise; therefore, the generalizability of our results is limited.
Implications for practice and research
The rapid application of scientific results is of utmost importance [83,84]. Our results suggest that PCT-guided AB management could reduce the length of AB therapy in ICU patients, especially in countries and institutes where routine AB administration exceeds 7 days.
The current sepsis guideline [5] recommends against the use of PCT in addition to clinical evaluation to decide when to start AB therapy in septic patients. However, we believe that further research is needed in this field, especially to evaluate PCT kinetics (i.e., changes over 12-24 h) compared to protocols based on a fixed value (i.e., 0.5 ng/mL as cut-off) [79,85]. Furthermore, the increased rate of recurrent infections, the difference between medical and surgical patients, and whether a liberal or a conservative regime is more beneficial also deserve further investigation. We also suggest that in future trials "organ support-free days" should be used as the primary outcome [86] rather than mortality, which is affected by a number of confounding factors during the full course of a critical illness and therefore may not necessarily reflect the efficacy of a particular intervention. Finally, we need data on immunocompromised patients, who may also benefit from this approach.
Conclusion
PCT-guided AB therapy may be associated with reduced AB use, lower 28-day mortality but higher infection recurrence, with similar ICU and hospital lengths of stay. Our results underscore the need for better-designed studies investigating the role of PCT-guided AB stewardship in critically ill patients.
Fig. 1 PRISMA 2020 flowchart representing the study selection process
Fig. 2 Forest plots representing the mean difference in length of AB therapy in (A) sepsis subgroups, (B) PCT protocol subgroups, and (C) patient population subgroups
Fig. 4 Forest plot representing the odds of recurrent infection
Table 1 Characteristics of the included studies. a Data presented as mean ± SD, median (IQR) or median [range]; b patients treated on wards under advanced supportive care because of a shortage of ICU beds; c cost-effectiveness analysis of de Jong et al., 2016. Abbreviations: AECOPD, acute exacerbation of chronic obstructive pulmonary disease; CAP, community-acquired pneumonia; ED, emergency department; ICU, intensive care unit; LRTI, lower respiratory tract infection; NA, not available; UTI, urinary tract infection; VAP, ventilator-associated pneumonia.
"Medicine",
"Biology"
] |
Chronological Review of the Catalytic Progress of Polylactic Acid Formation through Ring Opening Polymerization
The disposal of a large amount of polymer waste is one of the major challenges of this century. Use of bio-degradable polymers obtained from sustainable sources presents a solution to this problem. Poly(lactic acid) (PLA), a bio-degradable polymer, can be synthesized from sustainable sources such as corn, starch, sugarcane and chips. Ring opening polymerization (ROP) of lactide (LA) monomer using a metallic/bimetallic catalyst (Sn, Zn or Al) is the preferred method for the synthesis of PLA. However, PLA synthesized using such catalysts may contain trace elements of the catalyst. These catalyst traces are known carcinogens and as such should (ideally) be eliminated from the process. Use of organic catalysts instead of metallic catalysts may be one of the prominent solutions. Organic catalysts require a higher activation energy for the ROP reaction of LA. Such an energy requirement can be met through the application of alternative energy during the reaction. Alternative energy sources such as LASER, ultrasound and microwave are prominent options for implementing and processing the ROP of PLA. This paper is an effort to provide a chronological review and to establish the current state of the art in the field of PLA research.
INTRODUCTION
The disposal of a large amount of polymer waste, mainly daily-use consumer products, is one of the major challenges of this century. Use of bio-degradable polymers obtained from sustainable sources emerges as a promising solution to environmental waste. However, substituting traditional polymers derived from petroleum products with a bio-degradable polymer is not sufficient to overcome all the disadvantages of using petroleum-based polymers. Large-scale industrial production of biopolymers can introduce impurities into the finished product that can be toxic to the end user and harmful to the environment. For example, the use of metallic or bimetallic catalysts for the synthesis of biopolymers may enable industrial-scale throughput, but it also creates serious health issues such as carcinogenic effects [1-9]. Hence, research efforts need to be directed towards eliminating the use of such catalysts from the production process.
Ring opening polymerization (ROP) of LA monomer is one of the most preferred methods for the synthesis of poly(lactic acid) (PLA). The state-of-the-art technique developed by Dubois et al. for the ROP of LA is based on metallic and bimetallic catalysts (Sn, Zn, and Al) in suitable solvents. This process leads to throughputs in the range of 30-40 kg/hr, making production scalable and cost-effective. However, as the metallic and bimetallic catalysts were established to be carcinogenic, there is an urgent need to explore suitable alternatives.
The metallic catalysts typically used in the production of PLA, such as aluminium isopropoxide (Al(OPr)3), zinc lactate (C6H10O6Zn) and stannous octoate (Sn(Oct)2), result in the production of a nucleophile or electrophile, which initiates the polymerization process. Replacing the metallic catalysts with non-metallic catalysts or some other metal-free source makes the production 'inefficient' because of the low activation capacity of non-metallic catalysts. As a result, the throughput obtained from such a process is well below the requirement for industrial-scale production (30-40 kg/hr). Kamber et al. [5], Wang et al. [6] and Basaran et al. [7], among others, reported the use of organic/metal-free catalysts in the production of biopolymers. These studies conclude that although the ROP of LA is possible, the production rate is much lower (2-3 kg/hr) than with metallic catalysts (20-40 kg/hr) required for consumer-product standards.
ROP OF MONOMERS THROUGH METAL/ORGANIC CATALYST
The basis of ROP is opening the cyclic ring of monomers such as cyclic ethers, amides (lactams) and esters (lactones). The opened ring then acts as an active centre where other monomers join to form a larger polymer chain through ionic propagation. ROP consists of a sequence of initiation, propagation and termination reactions [12]. Once the process is started by the initiator, monomers add to the active polymer chain, increasing the chain length. A wide variety of cyclic monomers has been successfully polymerized using ROP, such as cyclic olefins, amines, sulphides, etc. The suitability of a cyclic monomer for polymerization depends on both thermodynamic and reaction-kinetic factors [12].
THERMODYNAMIC CONDITIONS FOR ROP OF CYCLIC MONOMERS
Thermodynamic factors such as binding energy, entropy, and the enthalpy of the bond-breaking and bond-formation processes play a very significant role in determining the relative stability of cyclic monomer and linear polymer structures during the conversion of cycloalkanes to the corresponding linear polymer [13]. ROP is thermodynamically favourable for all ring sizes except the 6-membered ring. The order of thermodynamically favoured cyclic structures is 3 > 4 > 8 > 5 > 7 atoms. This trend is a result of bond-angle strain in 3- and 4-membered rings, eclipsed conformational strain in the 5-membered ring, and trans-annular strain in 7- and 8-membered rings [12]. Thermodynamic parameters alone do not guarantee the polymerization of a cyclic monomer, as shown in the case of the 6-membered ring; polymerization also requires a favourable kinetic pathway to open the ring and undergo reaction. The existence of a heteroatom in the ring provides a suitable site for nucleophilic or electrophilic attack by an initiator, resulting in further propagation by opening of the ring. Such monomers polymerise when both thermodynamic and kinetic factors are favourable for the reaction [12].
ROP OF LACTIDE AND POLY-CONDENSATION OF LACTIC ACID FOR PLA SYNTHESIS
Based on production rate and molecular weight, the two most efficient and common methods to produce PLA are polycondensation of lactic acid and ring opening polymerization of LA (Fig. 1). Polycondensation involves removal of the water of condensation by the use of a solvent under high vacuum and high temperature. In general, it is the least expensive route, but in a solvent-free system it is difficult to achieve high production rates (30-40 kg/hr). To manufacture a low- to intermediate-molecular-weight polymer, the polycondensation approach was used by Carothers and is still used by Mitsui Toatsu Chemicals Inc. [14,15].
Fig. 1. Lactide and lactic acid monomer to form PLA polymer
The other efficient way to obtain PLA is the ROP of LA with suitable metal catalysts (tin, zinc, aluminium) in a suitable solvent [36]. Normally, the entropically favoured LA monomers are produced at temperatures below 180-200°C [19]. Based on cost and reaction time, ROP is the preferred process for large-scale industrial production. Catalysts like Al(OPr)3, C6H10O6Zn and Sn(Oct)2 have been studied to enhance the productivity of the ROP reaction [36]. To reach higher molecular weights, the use of coupling agents or esterification-promoting agents is required, which increases cost and complexity [36,37]; obtaining high-molecular-weight polyesters with good mechanical properties is not easy. Among these catalysts, stannous octoate (Sn(Oct)2), approved by the U.S. Food and Drug Administration (FDA), is the most widely used due to its high activity in LA polymerization [38].
ROP Mechanism of Lactide
ROP is generally initiated by the attack of an ionic (anionic or cationic) initiator on the cyclic monomer to create an active site for further addition of monomer(s) [16-18,21,22]. The reaction involves the nucleophilic attack of the monomer on the oxonium ion (Fig. 2).
Regarding the determination of the active species, Kricheldorf et al. [36] reported that tin halogenides were actually converted into tin alkoxides, which behave as the real active species. Several ring-opening polymerizations proceed as living polymerizations, and the polymer molecular weight increases linearly with conversion and with the ratio of monomer to initiator [6]. Living polymerization is the preferred mode of polymerization because it results in polymers with high molecular weight [7,20,24,29].
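This linear relation is worth making explicit. The sketch below evaluates the idealized living-ROP estimate Mn ≈ ([M]0/[I]0) × conversion × M(lactide), neglecting end groups, transesterification and chain transfer; the ratios and the conversion value are illustrative placeholders.

```python
# Minimal sketch of the linear molecular-weight relation for an ideal
# living ROP: Mn grows in proportion to conversion and to the
# monomer-to-initiator ratio. Assumes no transesterification, scission
# or chain transfer; the end-group mass is neglected for simplicity.
M_LACTIDE = 144.13   # g/mol, molar mass of the lactide dimer

def theoretical_mn(monomer_to_initiator: float, conversion: float) -> float:
    """Theoretical number-average molecular weight (g/mol) of PLA."""
    return monomer_to_initiator * conversion * M_LACTIDE

for ratio in (100, 500, 1000):
    mn = theoretical_mn(ratio, conversion=0.95)
    print(f"[M]0/[I]0 = {ratio:5d}  ->  Mn ~ {mn:9.0f} g/mol")
```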
CHRONOLOGICAL SCIENTIFIC RESEARCH DEVELOPMENT IN PLA SYNTHESIS FIELD
PLA as a biocompatible and biodegradable class of polymer has garnered a lot of attention in past decades and has also resulted in industrial-scale production at manufacturing units such as NatureWorks (USA) [33,39], Purac (Netherlands) [40], etc. In fact, metal catalysts (in suitable solvents) have been used for the synthesis of PLA for decades. Dubois et al. [41] in 1991 reported the mechanism of ROP of LA using a batch process and aluminium isopropoxide as a catalyst in toluene solvent. Witzke et al. [42] reported reversible kinetics of L-LA in an oven-dried container using stannous octoate as a catalyst in toluene solvent, a system incidentally associated with adverse effects (toxicity, irritation) on human health. Jennifer et al. [43] reported that the Sn(Oct)2 used in the polymerisation of LA is an irritant to the eyes, respiratory system, and skin. The different reaction stages involved in the polymerization of LA lead to the formation of impurities and side products. The use of Sn(Oct)2 or other metal catalysts is also very toxic and hazardous for the environment, although lactic acid and LA are themselves non-toxic [8].
Apart from the catalyst, the reaction time is also an important factor which decides the production cost and quality of commercial PLA. The reported reaction completion time for ROP kinetics by Dubois et al. [8] and others was several hours (50-100). Mehta et al. [44] reported the variation of the number-average molecular weight of PLA with time by using the experimental data from the batch process of Dubois et al. [41]. Jacobsen et al. [2] and Banu et al. [9], using reactive extrusion for the polymerization of LA, proposed a detailed study of the ROP process. They observed that in the initial few minutes the conversion rate reaches almost 95%, but if the process lasts longer, side reactions (like intermolecular trans-esterification and scission) can decrease the polymer molecular weight. For the estimation of monomer conversion and average molecular weight along the extruder, Banu et al. [9] modelled the material flow and mixing using the commercial Ludovic® simulator.
As mentioned earlier, the drawbacks of metal catalysts have resulted in the growth of organo-catalysis as a prominent solution for ROP reactions. Basaran et al. [5] reported an experimental technique detailing the synthesis and characterization of polylactide via a metal-free process, proposing the use of Montmorillonite-K10 (OMt-K10) both as the catalyst and as an inorganic filler to prepare a biopolymer nano-composite of PLA; this offers a suitable route for overcoming the impurities caused by metallic catalysts. OMt has a catalytic proton, which can act as an acid catalyst for the polymerization process of LA. As mentioned earlier by several research groups, an increase in temperature during the polymerization process initiates several side reactions which decrease the molecular weight of the resulting polymer [37,43,45]. Basaran et al. [5] also verified the effect of temperature and found that beyond 180°C the reaction behaviour started changing, and beyond a certain temperature range (180-200°C) the molecular weight started decreasing. Higher temperatures of 185-190°C boost unzipping and scission reactions and lead to a decrease of molecular weight as well as thermal degradation [47,48]. However, the basic principle of the ROP process is still the backbone of all new development and innovation in this field.
In the last few years, several research communities and groups have investigated the possibility of developing a technique through which the polymerization of LA becomes possible using a metal-free or organic catalyst. The use of alternative energies such as microwave, LASER, and ultrasound sources to achieve a precisely-controlled and efficient continuous polymerisation of high-molecular-weight PLA in a twin-screw extruder is currently being investigated. To implement the effect of the metal-free catalyst in the reaction mechanism, a proper understanding of the reaction kinetics is needed. It is necessary to develop a theoretical mathematical model to check the suitability of the parameters (time, concentration, temperature and rate constants) in the reaction mechanism for LA polymerization.
REACTION MECHANISMS FOR PLA FORMATION USING ORGANIC CATALYSTS
To replace the metallic catalyst for the ROP of LA and avoid its adverse effects on the environment, in 2007 Kamber et al. [6] proposed a very detailed review of the state of the art of the effectiveness of organic catalysts in ROP reaction mechanisms and the importance of organo-catalytic ring-opening polymerization. The work examined several methods to initiate the ROP, such as cationic, anionic, enzymatic and other organic ROP. The initial reaction scheme for understanding the ROP of LA is shown in Fig. 3.
Fig. 3. PLA formation routes
The reaction scheme referred to in Fig. 3 describes the reaction procedure well but does not give all the details, as it omits the intermediate steps which control and are ultimately responsible for the molecular weight distribution and the molecular weight of the macromolecules. Once the polymerization reaction starts, it goes on to a final stage with a certain molecular weight. As a result, step-growth or chain-growth processes influence the reaction. The step-growth process explains the details of the ROP process by providing effective details of the intermediate reactions and helps to control the molecular weight of the process with the emergence of "living" polymerization, or active-site reactions [31,39,40,47]. Fig. 4 explains the details of several mechanisms: the coordination-insertion mechanism, the activated-monomer mechanism, and the monomer-activated versus chain end-activated mechanisms.
Fig. 4. Different mechanisms for the ROP process
In general, the ROP of lactides follows the coordination-insertion mechanism [48]. This is different from the usual cationic and anionic mechanisms, in which free ions or ion pairs are the charged propagating species and their counterion shares a covalent bond [80]. The coordination-insertion mechanism is based on the coordination of the metal catalyst group with the monomer ring and the further attack of an alcohol group on the weak site of the monomer ring (Fig. 5). This mechanism shows better production yield and faster reaction times (20-60 min) in the ROP process [49]. There is also an alternative classification for enzymatic ROP known as the activated-monomer mechanism. In this process, the enzymes react with the monomer and activate it to enhance the polymer chain addition process [40]. The field of enzyme-catalysed ROP was demonstrated in the application of a lipase (a type of enzyme catalyst) for ROP of lactones by the independent groups of Parmar et al. [35], Kobayashi et al. [40] and Knani [34]. Enzymes show an effective stereo-active reaction and were extracted from renewable resources that can be easily recycled [9]. To define the role of a catalyst, classification of the catalytic process of ROP reactions by either the chain end-activation or the monomer-activation mechanism is crucial.
Several other ligands in organic and non-metallic catalysts are also capable of initiating ROP of lactide (LA), such as pyridines, phosphines and carbenes. The first organo-catalyst used in the living ROP of LA was reported in 2001, using basic amines such as 4-(dimethylamino)pyridine (DMAP) and 4-pyrrolidinopyridine (PPY) as trans-esterification catalysts [50]. DMAP was used successfully not only for trans-esterification but also for many other organic transformations, such as alkylation, acylation, nucleophilic substitutions, etc. [52]. Because of these properties, DMAP has been the centre of several reviews [51,53,54]. Fig. 5 describes the mechanism for ROP of LA using a pyridine such as DMAP.
In the case of ROP, the use of different types of carbenes (unsaturated and saturated imidazolylidenes and triazolylidenes) as effective catalysts has been investigated [60,62]. The ROP depends on the nature of the carbene and the monomer. The reported reaction rates were similar to those of the most active metal catalysts for the ROP of LA [52,61-63]. N-heterocyclic carbenes (NHCs) have higher nucleophilicity and basicity compared to DMAP, which is responsible for the higher reaction rates.
In the mechanism of polymerization of lactones using a carbene as the catalyst, termination can be initiated by deactivation of the carbene through the introduction of acetic acid, CO2 or CS2, which later forms a zwitterionic species (i.e. a molecule carrying both a positive and a negative charge) that can be removed from the polymer by precipitation. In the nucleophilic mechanism, production of a zwitterionic intermediate is the key feature [64]. Nucleophilic attack of the carbene on the lactone creates a zwitterionic species, followed by ring opening of the tetrahedral intermediate to create the acyl-imidazolium alkoxide zwitterion [65]. The decrease in molecular weight at higher monomer concentration happens because the active chain centre cannot react with the monomer in time owing to the increase in the system's viscosity. The monomer/initiator molar ratio has been examined in detail by Wang et al. [7]. It was observed that the molar ratio is relatively critical for obtaining high yield and high molecular weight [11,12]. Because fewer active centres are available in the polymerization system, the polymerization process cannot continue when the initiator amount is decreased. Yet increasing the initiator content favours a large number of shorter polymeric chains, thus decreasing the molecular weight of PLA. The amount of catalyst also directly affects the formation of active species and thus the monomer conversion. Further, the polymerization temperature and time also significantly affect the ROP of LA. It was also found that a rise in temperature enhances the rate of intermolecular trans-esterification and thermal degradation reactions and decreases the molecular weight of PLA, whereas below 15°C the polymerization reaction proceeds slowly [13].
ROP through Alternative Energies (Microwave, Ultrasound & Laser) Incorporation
Application of microwave heating to chemical reactions has received increasing attention in the past few years. Due to its qualities such as high efficiency, the capability of uniform heating and reduced reaction time, a large number of chemical reactions, both organic and inorganic, show a significant increase in reaction speed under microwave irradiation compared to general heating methods such as furnace chambers and LASER heating.
Microwave-assisted ROP of Polylactide
Microwave-assisted ROP technology for LA has emerged as a green method for the chemical synthesis of PLA due to its high efficiency and homogeneous heating [71,75,76,78,79,96]. The first microwave-irradiated polymerization of D,L-LA was reported by Liu et al. [73,74,77].
Zhang et al. [73] successfully polymerized D,L-LA using ethanol, ZnO, SnCl2 and Cat-A as catalysts under continuous microwave irradiation in less than an hour; a 36% yield was reported for the final synthesis. On the other hand, compared to conventional heating with the same experimental setup, the non-thermal effect of microwave irradiation was revealed to be negligible. Based on chemical bonding studies, the LA molecule contains two polar carbonyl groups which provide a suitable site for dielectric heating by absorbing microwaves; because of this, Liu et al. [73] investigated the ROP of D,L-lactide (DLLA) under microwave irradiation for PLA synthesis [18]. The rough reaction scheme of microwave-assisted ROP of LA is shown in Fig. 9.
Fig. 9. Microwave-assisted ring-opening polymerization of lactide
Regarding the reaction procedure, a sample mixture of D,L-LA was prepared and mixed with Sn(Oct)2, after which the reaction mixture was treated with three vacuum-argon cycles to remove the solvent. The reaction mixture was then irradiated with 2.45 GHz microwaves at various power levels. The irradiated mixture was then cooled in dichloromethane and precipitated in methanol. The microwave energy was applied as a pulsed source to irradiate the mixture for short times. Further, the weight ratio of the polymer precipitate to the monomer was used to determine the yield of conversion of LA into PLA. The precipitate obtained was verified as P-DLLA by means of 1H NMR spectroscopy and GPC tests, which gave results similar to those of an authorized PLA specimen. By measuring the weight-average molecular weight (Mw) and yield of the resultant P-DLLA at varied time intervals, the effect of microwave energy on the ROP of DLLA was investigated at different power levels (170, 255, 340, and 510 W). With the help of this detailed investigation, Liu et al. [73] proposed that the rate of the polymerization step and the chain propagation of PDLLA were both enhanced significantly by an increase in the microwave power up to a certain extent [94,95].
Ultrasound/Ultrasonic Facilitated ROP of Lactide
In the literature, the use of UV and ultrasound sources has been reported for degradation of the PLA chain. Work based on the separation of PLA chains with their complex substituents, called poly(lactic-co-glycolic acid), using intense and targeted ultrasound has also been reported; because of this, ultrasound could be used as a relevant future alternative-energy (AE) source for the ROP process, providing considerable bond-breaking energy to initiate the polymerization reaction [81]. Deng et al. [83] and Oster et al. [84] demonstrated the application of a UV source for surface grafting polymerization in detail (Fig. 10). Dubey et al. [96,97] and coworkers also revealed the benefits of using an ultrasound source within the reaction process in great detail; its implementation highlighted major benefits in the reaction process and output.
ROP through Continuous Reactive Extrusion
There are several polymer processing techniques like melt-blending, polymerization, branching, grafting and functionalization. Reactive extrusion (REx) is a cost-effective method for these techniques on account of its low-cost production and processing [85,86]. Important features of extrusion polymerization are the following:
- Melt processing can be carried out in a solvent-free medium, so the product can be easily isolated.
- The process is continuous, starting from the monomer and completing with the formation of the polymer or final product.
- Residence time and residence time distribution can be controlled.
- Several extrusion streams can be incorporated while the process is running.
There are several benefits of extruders in different applications, such as control over the degree of mixing, proper process control and higher conversion output. The effectiveness of the extruder is governed by the geometry of the screws used [86]. Residence time, power input and the scale of mixing to form the melt in the extruder are all controlled by the screw size, geometry and rotation speed.
Various types of high-molecular-weight biodegradable polymers are prepared by ROP of cyclic esters through extrusion polymerization, such as poly(ε-caprolactone) (PCL) (Fig. 11), polylactides (PLAs) and other aliphatic-aromatic poly-condensates such as poly(butylene adipate-co-terephthalate) [82]. By using reactive extrusion for the ROP of several polymers at the industrial level, it is possible to have control over the polymerization, which is required to achieve desired molecular weights and suitable functional end groups [87].
Apart from PCL and PLAs, there are several other biodegradable aliphatic polyesters, for which the traditional polycondensation method is used. One such polymer is poly(alkylene succinate), designed by Showa Denko and trademarked Bionolle® [88]. But due to relatively poor mechanical properties and high cost compared to other polymers (polyethylene and polypropylene), biodegradable copolyesters are still not very popular. It is possible to combine these biodegradable polymers with cheap inorganic fillers (silicate-type particles) or organic fillers (starch granules) to reduce the cost and to optimise the properties of aliphatic polyesters [86].
ROP of PLA using Continuous Extrusion Reaction
The current trend for the production of PLA at the commercial level mostly supports the use of a continuous single-stage reactive extrusion process, or twin-screw extruder technology, which satisfies both kinetic and thermal stability requirements [27]. For reactive extrusion polymerization, the use of a purified form of LA is very significant. A mixture of D- or L-LA, stabiliser and the catalytic system is transferred continuously into a nitrogen-purged material feeding unit. The crystalline, powdery LA is mixed with the catalyst, such as stannous octoate dissolved in toluene; the toluene is later extracted from the mixture under vacuum. The design of the screw geometry strongly influences the degree of mixing of PLA with the other components, and the degree of mixing has a huge impact on the ROP of LA (Fig. 12).
The LA ROP proceeds through the 'coordination-insertion' mechanism, involving the cleavage of the oxygen-acyl bond of the cyclic ester monomer. It has been reported that the presence of one equimolar Lewis base, such as triphenylphosphine, along with the 2-ethylhexanoic tin(II) salt (Sn(Oct)2), significantly improves the LA polymerization rate. For the production of PLA, the optical purity (D and L type) of the reagent lactic acid is crucial, because small amounts of enantiomer impurity change properties such as the crystallinity or biodegradation rate of the polymer. Another factor which affects the properties of the polymer is the detection and removal of impurities during the reaction, because impurities can easily change the intermediate reactions and the product [79,28]. Dubey et al. [96,97] also revealed the reaction-kinetic details of the ROP process for PLA formation by considering a combination of a metal catalyst and an AE source in the extrusion reaction process. Fig. 13 represents the variation of the conversion (X) and the number-average molecular weight Mn along the length of the screw (which in a continuous process at steady state can be correlated with time), obtained with Ludovic® [92,93], taking into account the effect of the polymerization. These results correspond to the simulation of the experiment with the following initial conditions: temperature 50-220°C, AE source 250-600 W, screw speed 300-600 rpm. Similar curves have been generated for each reactive extrusion experiment.
The market for biodegradable and common household consumable polymers, such as PET, HIPP and PVC, was approximately 200,000 tons in 2005, of which 50% was PLA. The demand for industrial production of biodegradable polymers instead of conventional petrochemical polymers increased many-fold after the environmental harms (landfill issues) related to petrochemical-based polymers were highlighted. Sven Jacobsen et al. [26] mentioned in an article that, in the case of plastic waste, the US alone produced 35 million tons of consumer polymers in 1998, whereas the corresponding figures for Europe and Asia were 34 million tons and 25 million tons. Since 2009, attention has focused on post-consumer plastics, i.e. plastics that can be 'recycled' to reuse the material from which they are made and to reduce the amount of waste going into landfills. Commercial use of PLA has come into prominence in the last 15 years because of its biodegradable and biocompatible nature and its extensive applications in the field of medical and clinical consumer products [46]. Lactic acid is the major component in food-related applications in the USA and covers 85% of the commercial products [90]. The large production volume requires an economically viable manufacturing process. For large-scale, metal-free commercial production of this degradable polymer, continuous polymerization of PLA seems to be an essential and efficient route.
Based on quality and consumer requirements, several industrial processing methods, such as reactive extrusion, injection moulding, injection stretch blow moulding, blown film, casting, thermoforming, foaming, fibre spinning, blending, batch processing and compounding, are employed to produce PLA [18,19,22]. The selection of the initiator system, the catalyst concentration, the monomer-to-initiator ratio, and the polymerization time and temperature change the properties of the polymer. Initially, the ROP of LA was carried out as a batch process, using several vessels to mix the initiator and the catalyst with the monomer at a given temperature. The commonly used solvent for the batch process is toluene. Nowadays, for higher yield and cost effectiveness, PLA production is mostly based on reactive extrusion through a twin-screw extruder [9,38,86].
KINETICS AND THEORETICAL MODELLING OF ROP MECHANISM
In the literature, several reaction mechanisms have been proposed to describe the ROP process. To explain the interdependence of reaction parameters (monomer concentration, temperature, rate constants), reaction mechanisms were formulated by several groups in the form of ordinary differential equations [25,30,32]. Different mathematical kinetic models have been proposed, but they remain far fewer than the available experimental/empirical data. To verify the experimental results theoretically, different groups adopted different mathematical techniques.
Banu et al. [9] applied the least squares method in their mathematical model to verify the reactive extrusion experimental work on ROP of LA using a stannous octoate catalyst. The procedure was based on the boundary value problem solver bvp4c in MATLAB.
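To make the parameter-estimation idea concrete, the following is a minimal sketch, in Python rather than MATLAB, of fitting an apparent pseudo-first-order rate constant to conversion-versus-time data by least squares. The data points, the simplified rate law and the fitted value are illustrative assumptions only, not those of Banu et al.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_data = np.array([0.0, 1.0, 2.0, 4.0, 8.0])      # time, min (synthetic)
x_data = np.array([0.0, 0.25, 0.45, 0.70, 0.90])  # conversion (synthetic)
M0 = 1.0                                          # initial monomer conc., mol/L

def conversion(k_app, t):
    # Integrate -d[M]/dt = k_app*[M] and convert to conversion X = 1 - M/M0.
    sol = solve_ivp(lambda t, M: -k_app[0] * M, (0, t[-1]), [M0],
                    t_eval=t, rtol=1e-8)
    return 1.0 - sol.y[0] / M0

res = least_squares(lambda k: conversion(k, t_data) - x_data,
                    x0=[0.1], bounds=(0, np.inf))
print(f"fitted k_app = {res.x[0]:.3f} 1/min")
```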
Ryner et al. [68] and Lavan et al. [22] used the hybrid density functional method B3LYP, a quantum chemistry calculation performed with the Gaussian software, to calculate the geometries and energies that govern the thermodynamic properties of ROP of LA. Yu et al. [11,91] numerically solved the rate kinetics equations with the help of the method of moments to verify the experimental work using stannous octoate as the metallic catalyst. The method of moments is a technique for calculating estimators of the parameters based on matching the sample moments with the corresponding distribution moments. Dubey et al. [96] solved the rate kinetic equations involved in the ROP of LA in the presence of the metal catalyst and an AE source using MATLAB. Dubey et al. [97] revealed the reaction kinetic details of the ROP process for PLA formation by considering the combination of the metal catalyst and an AE source in the extrusion reaction process, an innovative contribution in this field.
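In polymerization kinetics, the method of moments replaces the (in principle infinite) set of chain-length balances with balances for the leading moments of the chain-length distribution. Below is a minimal, self-contained sketch for a living polymerization without termination; the rate constants and initial concentrations are illustrative assumptions, not the values used by Yu et al.

```python
import numpy as np
from scipy.integrate import solve_ivp

k0, kp = 0.5, 2.0   # illustrative rate constants, L/(mol*min)
M_w = 144.13        # molar mass of one lactide unit, g/mol
y0 = [1.0, 0.01, 0.0, 0.0, 0.0]   # [M], [I], lam0, lam1, lam2

def moments(t, y):
    M, I, lam0, lam1, lam2 = y
    ini = k0 * I * M                            # initiation rate
    return [-ini - kp * M * lam0,               # d[M]/dt
            -ini,                               # d[I]/dt
            ini,                                # d(lam0)/dt: new chains
            ini + kp * M * lam0,                # d(lam1)/dt
            ini + kp * M * (2 * lam1 + lam0)]   # d(lam2)/dt

sol = solve_ivp(moments, (0, 60), y0, rtol=1e-9)
M, I, lam0, lam1, lam2 = sol.y[:, -1]
print(f"conversion X = {1 - M / y0[0]:.3f}")
print(f"Mn = {M_w * lam1 / lam0:.0f} g/mol, PDI = {lam2 * lam0 / lam1**2:.3f}")
```

The number- and weight-average molecular weights then follow directly as Mn = M_w·λ1/λ0 and Mw = M_w·λ2/λ1, so the polydispersity index is λ2λ0/λ1².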
In 2007, Mehta et al. [18] proposed a detailed theoretical kinetic model of the ROP of LA based on the experimental work of Dubois et al. [8], whose results were based on the use of aluminium isopropoxide as the catalyst. They formulated the reaction kinetics of the ROP of LA in the form of first-order ordinary differential equations.
The ROP of LA consists of three basic stages (eq. 1-3): initiation, propagation, and termination [8]. The reaction mechanism is the following:

$$I + M \xrightarrow{k_0} P_1 \quad \text{(Initiation)} \tag{1}$$

$$P_j + M \xrightarrow{k_p} P_{j+1} \quad \text{(Propagation)} \tag{2}$$

$$P_j \xrightarrow{k_t} \text{dead polymer} \quad \text{(Termination)} \tag{3}$$

where M, I and P_j represent the monomer, the initiator and a polymer chain of length j, and k_0, k_p and k_t represent the initiation, propagation and termination rate constants.
The work reported that ROP of LA proceeds through the "coordination-insertion" mechanism. They found that the polymerization is normally "living" until a certain molecular weight is reached. The kinetics of LA polymerization was investigated at 70 °C and found to be first order [41]. In conclusion, the proposed kinetics law is given below (4):

$$-\frac{d[\mathrm{LA}]}{dt} = k_j\,[\mathrm{LA}]\,[\mathrm{Al(OPr)_3}] \tag{4}$$

where [LA] and [Al(OPr)_3] represent the concentrations of the LA monomer and the catalyst, and k_j stands for the rate constant of the reaction.
To validate their theoretical model against the data of Dubois et al. [41], they proposed a set of ordinary differential equations (ODEs) accounting for the role of each reaction parameter in the reaction kinetics.
The mass balance equations for the batch reactor corresponding to the above kinetic scheme (eq. 1-3) are as follows (eq. 5-9):

$$\frac{d[I]}{dt} = -k_0[I][M] \tag{5}$$

$$\frac{d[M]}{dt} = -k_0[I][M] - k_p[M]\sum_{j}[P_j] \tag{6}$$

$$\frac{d[P_1]}{dt} = k_0[I][M] - k_p[M][P_1] - k_t[P_1] \tag{7}$$

$$\frac{d[P_j]}{dt} = k_p[M]\left([P_{j-1}] - [P_j]\right) - k_t[P_j], \quad j \ge 2 \tag{8}$$

$$\frac{d[D_j]}{dt} = k_t[P_j] \tag{9}$$

where [M], [I] and [P_j] represent the concentrations of the monomer, the initiator and the living polymer chains of length j, [D_j] denotes terminated (dead) chains of length j, and k_0, k_p and k_t represent the rate constants of the initiation, propagation and termination reactions.
Mehta et al. [8] solved the above ODE rate equations using a multiple-step Euler method and verified the model against the experimental report of Dubois et al. [41] [12]. For modelling the process, the maximum polymer chain length was set to 5,000 repeating units, based on experimental observations [8]. In order to compare the simulation output at several monomer-to-initiator ratios with experimental data, the number-average molecular weight (Mn) was calculated as the ratio of the first to the zeroth moment of the chain-length distribution, scaled by the monomer molar mass M_0 (10):

$$M_n = M_0\,\frac{\sum_j j\,([P_j]+[D_j])}{\sum_j ([P_j]+[D_j])} \tag{10}$$

The comparison of the variation of the number-average molecular weight (Mn) versus time is shown in Fig. 14.
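A compact sketch of this approach, explicit Euler integration of the truncated chain balances (eq. 5-9) followed by the Mn evaluation of eq. (10), is shown below. The rate constants, initial concentrations and the (much smaller) chain-length cutoff are illustrative assumptions, not the values used by Mehta et al.

```python
import numpy as np

k0, kp, kt = 0.5, 2.0, 1e-4   # illustrative rate constants
M, I = 1.0, 0.01              # initial monomer / initiator conc., mol/L
M_init = M
JMAX = 500                    # truncated max chain length (5,000 in the paper)
P = np.zeros(JMAX + 1)        # P[j]: living chains of length j (P[0] unused)
D = np.zeros(JMAX + 1)        # D[j]: dead (terminated) chains of length j
dt, steps = 1e-3, 60_000      # explicit Euler step size and count

for _ in range(steps):
    ini = k0 * I * M                  # initiation rate, eq. (5)/(7)
    prop = kp * M * P                 # propagation flux out of each length
    dP = -prop - kt * P               # eq. (7)-(8): losses
    dP[2:] += prop[1:-1]              # eq. (8): chains grow from j-1 to j
    dP[1] += ini                      # eq. (7): initiation creates P_1
    D += dt * kt * P                  # eq. (9): dead chains accumulate
    M += dt * (-ini - prop[1:].sum()) # eq. (6)
    I += dt * (-ini)                  # eq. (5)
    P += dt * dP

M_w = 144.13                          # molar mass of one lactide unit, g/mol
j = np.arange(JMAX + 1)
chains = P + D
Mn = M_w * (j * chains).sum() / chains.sum()   # eq. (10)
print(f"conversion = {1 - M / M_init:.3f}, Mn = {Mn:.0f} g/mol")
```

Chains that would exceed the cutoff JMAX are simply lost in this sketch; with the parameters above the average chain length stays well below the cutoff, so the truncation error is negligible.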
Although the ROP of LA can be carried out using several catalysts, obtaining PLA in a continuous reactive extrusion process requires the bulk polymerisation to be completed in a very short time (a few minutes), corresponding to the residence time of the extrusion process. Therefore, stannous octoate Sn(Oct)2 was selected as the catalyst for the ROP of LA, because Sn(Oct)2 promotes fast LA polymerisation resulting in higher molecular weight. However, when Sn(Oct)2 is used, side reactions (such as transesterification and random chain scission) occur, even during any subsequent melt processing [7].
In 2009, and later in 2011, Yu et al. [91] proposed a model which considered Sn(Oct)2 as the catalyst and 1-dodecanol as the co-catalyst, at temperatures between 130 and 180 °C, to start the ROP of LA monomers [11,91]. Experiments at different monomer-to-catalyst and cocatalyst-to-catalyst ratios were performed to investigate the proper reaction mechanism of the ROP process [11]. "Ester interchange" reactions, also called transesterification, and non-radical random chain scission were added to the previously proposed reaction mechanism, where C is the catalyst Sn(Oct)2, A is octanoic acid (OctOH) produced by the catalyst, R_i represents the active polymer chains of length i, D_i represents the dormant polymer chains of length i, G_j represents the terminated polymer chains of length j, and M the monomer.
To consider the effect of the various reaction parameters on the proposed reaction mechanism, reaction equations 11-19 were formulated as mathematical equations (eq. 20-25) which show the variation of the various quantities with time [11]. In order to compare the output of the developed kinetic model at several monomer-to-initiator ratios with experimental data, Yu et al. [91] calculated the number-average molecular weight (Mn) for different initial conditions [11]. The variation of Mn versus time (Fig. 15) shows the significance of the several stages of the reaction mechanism. The initial variation up to 0.01 h shows the dominance of the initiation process on the rate of reaction. At 0.02 h propagation starts, and the saturation at about 0.1 h marks the termination stage of the ROP process [11]. The final stage includes all the side reactions, transesterification and chain scission, which complete the termination of the reaction.
A detailed model of L-LA polymerization at different temperatures, using Sn(Oct)2 as the catalyst and 1-dodecanol as the co-catalyst, was thus developed. The model considered the effect of inter- and intramolecular transesterification reactions and was validated by comparison with experimental data at different monomer-to-catalyst and catalyst-to-cocatalyst ratios. The transesterification and chain-scission side reactions were identified as the reactions responsible for the decrease of molecular weight.
CONCLUSION
Synthesis of PLA from LA monomers through the ROP process using a metal catalyst is the standard industrial process, and it leads to throughputs of up to 20 kg/hr. Although PLA can boast of its eco-credentials, it can be toxic. The health and environmental hazards may emanate from traces of the metal catalyst left behind in the polymer after the polymerisation of PLA. In order to produce non-toxic PLA, the metal catalyst can be replaced with organic and/or metal-free catalysts. Several studies have explored this possibility, but unfortunately they obtained low conversion rates and PLA with low molecular weight. Also, the maximum throughput was about 2-3 kg/hr, much lower than the industrially sustainable/commercially viable rate of 20 kg/hr.
For the production of safer and cleaner PLA polymer, from lab scale to industrial scale, further investigations will be required, including large-scale computational simulation. Research involving the experimental and theoretical investigation of PLA synthesis with non-metal catalysts and alternative energy sources in the reaction seems an effective direction to focus on. Theoretical modelling and simulation are useful to provide an estimate of the throughput and help to plan the experimental/industrial production accordingly.
Groups such as the InnoREX consortium (www.InnoREX.eu) are performing detailed step-by-step investigations of the above-defined mechanism to achieve highly precise, controlled and large-scale synthesis of PLA through the reactive extrusion process. The group is studying the impact of replacing the metal catalyst with an organic one, as well as the implementation of AE sources in the reaction process. To achieve this target, lab-scale experiments and mathematical simulation models to verify the output of the reaction are being performed. To assess the impact of AE sources and to study the production quality and market demands of PLA (medical, electronic and food packaging), several industrial partners are also involved in the InnoREX group. The InnoREX group is not the only one interested in a safer and non-toxic method to prepare PLA. Many experts from industry, such as Purac and NatureWorks, are also trying to develop a novel reactor concept for continuous, highly precise and controlled metal-free polymerisation of PLA.
Other work used the montmorillonite K10 (OMt-K10) filler both as the catalyst and as an inorganic filler to prepare PLA composites. OMt-K10 has a catalytic proton, which can act as an acid catalyst for the polymerization of LA. As reported earlier by several research groups, an increase in temperature during the polymerization process initiates several side reactions which decrease the molecular weight of the polymer [45]. Basaran et al. also verified the effect of temperature: beyond a certain temperature (about 200 °C) the reaction behaviour started changing and the molecular weight started decreasing. Temperatures above about 190 °C boost unzipping and chain-scission reactions, leading to a decrease of molecular weight as well as thermal degradation [48]. However, the basic principle of the ROP process is still the backbone of all new development and innovation in this field.
Fig. 4. Details of several ROP mechanisms, including the coordination-insertion mechanism; Fig. 8 represents the mechanism of NHC-catalysed ROP and the formation of the zwitterion.
Fig. 13. Mn (purple line) vs. T and X (red line) vs. T, obtained with Ludovic® for T = 50-220 °C, AE = 250 W, 600 rpm screw speed.
Fig. 14. A comparison of experimental [41] and modelling results (number-average molecular weight) for the polymerization of (D,L)-lactide over a time scale of 0-150 hours. The solid lines are the solutions obtained from the model and the points are the experimental values.

Fig. 15. Variation of the number-average molecular weight (Mn) versus time for the model of Yu et al. (see the text).
"Materials Science",
"Chemistry"
] |
Mapping OMIM Disease–Related Variations on Protein Domains Reveals an Association Among Variation Type, Pfam Models, and Disease Classes
Human genome resequencing projects provide an unprecedented amount of data about single-nucleotide variations occurring in protein-coding regions and often leading to observable changes in the covalent structure of gene products. For many of these variations, links to Online Mendelian Inheritance in Man (OMIM) genetic diseases are available and are reported in many databases collecting human variation data, such as Humsavar. However, the current knowledge of the molecular mechanisms leading to diseases is, in many cases, still limited. For understanding the complex mechanisms behind disease insurgence, the identification of putative models that consider the protein structure and the chemico-physical features of the variations can be useful in many contexts, including early diagnosis and prognosis. In this study, we investigate the occurrence and distribution of human disease-related variations in the context of Pfam domains. The aim of this study is the identification and characterization of Pfam domains that are statistically more likely to be associated with disease-related variations. The study takes into consideration 2,513 human protein sequences with 22,763 disease-related variations. We describe patterns of disease-related variation types in one-to-one relation with Pfam domains, which are likely to be possible markers for linking Pfam domains to OMIM diseases. Furthermore, we take advantage of the specific association between disease-related variation types and Pfam domains for clustering diseases according to the Human Disease Ontology, and we establish a relation among variation types, Pfam domains, and disease classes. We find that Pfam models are specific markers of patterns of variation types and that they can serve to bridge genes, diseases, and disease classes. Data are available as Supplementary Material for 1,670 Pfam models, including 22,763 disease-related variations associated with 3,257 OMIM diseases.
INTRODUCTION
In the last decade, several efforts have been devoted to the problem of functional annotation of protein variants, with the aim of relating variations to specific diseases (Vihinen, 2017, 2018). A collection of variations of genetic diseases is now available, and this prompted the investigation of the molecular mechanisms responsible for protein failure (Schaafsma and Vihinen, 2018). In particular, non-synonymous protein variations can promote changes of active/binding sites and/or protein instability, and can hamper protein-protein and ligand-protein interactions (Kucukkal et al., 2015; Ittisoponpisan et al., 2019; Ofoegbu et al., 2019). Molecular mechanisms can therefore differ, and different phenotypes may share common molecular mechanisms, independent of the different genes (Deans et al., 2015; Reeb et al., 2016; Babbi et al., 2019, and references therein). Several studies also focused on determining the most frequent protein variants associated with diseases, with the aim of helping functional annotation starting from variant sequencing (Niroula and Vihinen, 2017; Zeng and Bromberg, 2019).
Different computational methods, based on different approaches, are available for the functional annotation of variations. Routinely, given a specific variation, computational methods return, with a computed reliability, whether the change of a side chain in a protein is disease-related or not (Niroula and Vihinen, 2016).
An interesting aspect of disease-related protein variants is the protein instability promoted by the variations (Casadio et al., 2011; Savojardo et al., 2019, and references therein). Protein instability may be related to a disease, although it is not the only possible mechanism. For the functional annotation of disease-related variations, the chemico-physical properties of the variation and its effect on the close environment in the protein structure are routinely taken into consideration. It appears that the correlation between the strength of association with disease and the strength of association with protein structure perturbation is moderate (Savojardo et al., 2019).
The problem of which phenotype is associated with a given variation or a set of variations has been scarcely addressed, and it remains unanswered, given the complexity of the scenario relating phenotypes to variations. Existing databases can relate genes to diseases and/or variations to diseases (MalaCards, Rappaport et al., 2017; GeneCards, Stelzer et al., 2016; DisGeNet, Piñero et al., 2020; eDGAR, Babbi et al., 2017; Humsavar, UniProt Consortium, 2019; OMIM, Amberger et al., 2015).
Protein domains have been adopted to explore associations between genes and human-inherited diseases (Zhang et al., 2011, 2016; Yates and Sternberg, 2013; Wiel et al., 2017, 2019). Models of protein domains are available in the Pfam database (El-Gebali et al., 2019); they enable the clustering of proteins into protein families, each represented by a multiple sequence alignment, mainly based on protein structural alignments and cast into hidden Markov models (HMMs). Initially, similarities of disease phenotypes were exploited within a given domain-domain interaction network, and a Bayesian approach was proposed to prioritize candidate domains for human complex diseases (Zhang et al., 2011). Then, domain-disease associations were inferred from domain-protein, protein-disease, and disease-disease relationships (Zhang et al., 2016). In these studies, the bottom layer of variations in proteins, detected in large-scale sequencing experiments, was not taken into consideration, restraining the analysis only to the already known protein- or gene-disease associations. More recently (Wiel et al., 2017), with the notion of homologous domains in proteins, variants were aggregated to improve their interpretation, and a web server (MetaDome, Wiel et al., 2019) was made available for the pathogenicity analysis of genetic variants.
In a previous study (Savojardo et al., 2019), we introduced the notion of variation type, in order to also take into account the physico-chemical properties of the variations (Casadio et al., 2011). After mapping genetic disease-related variations onto a restricted set of human protein three-dimensional (3D) structures, we found that the distribution of disease variation types significantly varies across different structural/functional Pfam models.
In this study, relying on the relationship between genes and phenotypes, we ask to what extent possible patterns of variation types framed into Pfam domains are significant for a reliable association to specific groups of maladies.
Dataset Construction
The dataset adopted in this study was derived from the Humsavar database 5 release 2020_04 of August 2, 2020, listing all missense variants annotated in human UniProtKB/Swiss-Prot (UniProt Consortium, 2019) entries.
From the initial set of proteins included in the database, we only selected those reporting at least one variant implicated in the disease, excluding proteins reporting only polymorphisms not associated with disease insurgence. Moreover, any variation labeled as "unclassified" (i.e., with uncertain implications in disease) was filtered out. Finally, we only retained disease-related variations associated with a genetic disorder reported in the Online Mendelian Inheritance in Man (OMIM) catalog 9 .
The set of neutral variations was extended using data retrieved from the GnomAD database (exome version 2.1.1) (Karczewski et al., 2020). Only variations occurring in our set of proteins, not already included in Humsavar and with clinical significance labeled as "Benign/Likely benign" by ClinVar (release 2021-03-23) (Landrum et al., 2020), were retained.
Pfam (El-Gebali et al., 2019) annotations were retrieved from the Pfam-A region annotation file for Homo sapiens version 33.1, obtained via the Pfam FTP server. From all the annotations available, we only retained those occurring in proteins included in our set of data and covering at least one disease-related variation.
Mapping OMIMs to Disease Ontology
The DO (Human Disease Ontology) OBO (Open Biological and Biomedical Ontology) file, release of September 15, 2020, was downloaded and used directly to retrieve annotations for each OMIM disease by means of cross-references. Each retrieved leaf DO term associated with a single OMIM was expanded up to the ontology root term, including all ancestors. Term expansion was computed using an ad hoc script to parse the OBO file.
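A minimal sketch of such an expansion step is shown below, assuming a generic OBO file: parse the "[Term]" stanzas, collect the "is_a" parents, then walk each leaf term up to the root. The file name, the example term and the structure of the returned mapping are illustrative, not the authors' actual script.

```python
from collections import defaultdict

def parse_obo(path):
    """Return {term_id: set(parent_ids)} from an OBO file."""
    parents, term = defaultdict(set), None
    for line in open(path, encoding="utf-8"):
        line = line.strip()
        if line == "[Term]":
            term = None                     # start of a new stanza
        elif line.startswith("id: "):
            term = line[4:]
        elif line.startswith("is_a: ") and term:
            # "is_a: DOID:4 ! disease" -> keep only the parent id
            parents[term].add(line[6:].split(" ! ")[0])
    return parents

def ancestors(term, parents):
    """All ancestors of a term, up to and including the root(s)."""
    out, stack = set(), [term]
    while stack:
        for p in parents.get(stack.pop(), ()):
            if p not in out:
                out.add(p)
                stack.append(p)
    return out

parents = parse_obo("doid.obo")             # hypothetical local file name
print(sorted(ancestors("DOID:14566", parents)))   # example DO term
```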
Computing the Disease Score
For each Pfam domain, we estimated a propensity score for the association to disease as follows:

$$\mathrm{score}(pfam) = \frac{n_d^{pfam}/(n_d^{pfam}+n_p^{pfam})}{N_d/(N_d+N_p)} \tag{1}$$

where n_d^pfam and n_p^pfam are the number of disease-related and polymorphism variations in the domain pfam, while N_d and N_p are the same numbers in the whole dataset. In the dataset, scores range from 1.40 down to 0.03.
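The computation is a one-liner; the sketch below assumes the ratio-of-fractions form reconstructed in Eq. 1, and the per-domain counts in the example are invented for illustration (only the dataset-wide totals come from the text).

```python
def disease_score(n_d, n_p, N_d, N_p):
    """Propensity of a Pfam domain for disease-related variations:
    fraction of disease variations in the domain, normalized by the
    same fraction over the whole dataset (values > 1 = enrichment)."""
    return (n_d / (n_d + n_p)) / (N_d / (N_d + N_p))

# Illustrative numbers only: a domain with 40 disease-related and
# 10 neutral variations, against the dataset-wide 29,949 / 20,797.
print(round(disease_score(40, 10, 29_949, 20_797), 2))  # -> 1.36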
Kullback-Leibler Divergence Between Distributions
Differences between probability distributions were evaluated using the Kullback-Leibler divergence:

$$D_{KL}(p\,\|\,q) = \sum_{x \in X} p(x)\,\log_2\frac{p(x)}{q(x)} \tag{2}$$

where p and q are two discrete probability distributions defined on the same probability space X.
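A small sketch of this computation in base 2 (so the result is in bits, matching the values quoted later for the variation-type distributions); the example distributions are illustrative.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits (log base 2);
    assumes p and q are aligned discrete distributions with q > 0
    wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0          # terms with p(x) = 0 contribute nothing
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Illustrative: a Pfam-specific distribution over 16 variation types
# versus a flat background.
pfam = np.array([4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0], float)
pfam /= pfam.sum()
background = np.full(16, 1 / 16)
print(f"{kl_divergence(pfam, background):.2f} bits")
```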
A Dataset of Variations With Annotated Pfam
Overall, our dataset comprises 50,746 variations occurring in 2,959 proteins implicated in 3,884 genetic disorders. Disease-related variations in these proteins number 29,949, accounting for 55% of the total. The remaining 20,797 variations (45%) are neutral. Table 1 shows summary statistics for the dataset analyzed in this study.
Restricting the set of proteins to those having Pfam entries covering at least one disease-related variation, we ended up with 2,513 proteins (corresponding to 85% of the initial protein set) implicated in 3,257 distinct genetic diseases. Overall, 1,670 distinct Pfam entries were annotated on these proteins. A subset of 548 out of 1,670 Pfams occurs in two or more proteins in the set. The vast majority (96%) of Pfam entries are of type "Domain" or "Family", while a very small fraction accounts for the "Repeat", "Coiled-coil", "Motif", and "Disordered" types.
Data shown in Table 1 clearly indicate that the incidence of disease-related variations within Pfam domains is significantly higher than the background (71% against 55%).
Overall Pfam Association With Disease
We were interested in elucidating the overall association between Pfam and OMIM diseases. For each entry in the set of 1,670 Pfam domains in our dataset, we computed the score for the association to disease with the formula reported in Eq. 1. A value greater than 1 for this ratio highlights a higher abundance of disease variations in the Pfams than in the background. The complete result of this analysis is reported in Supplementary Table 1 for all the 1,670 Pfam entries. About 48% of Pfam entries have a value greater than 1, as a consequence of the overall propensity of disease-related variations to be located within Pfam domains. In general, the distribution of scores is not random and reflects a differential disease association for the different Pfam entries.
In Table 2, we list the results for the 20 highest-scoring Pfams covering 10 or more proteins. Scores with corrected p-values (Supplementary Table 2) equal to or lower than 0.1 are highlighted (the top-scoring Pfams are all significant at the 0.1 level). Significance does not hold for some Pfams covering only a few variations; in these cases, more data are needed in order to properly evaluate the association to disease.
Interestingly, Pfam entries reported in Table 2 can be grouped into few functional classes, including DNA-binding domains (accounting for eight domains/families), transmembrane domains (three), and enzymes (three).
Pfams Have Distinctive Patterns of Disease Variation Types
Going a step further in the analysis, we investigated the composition of disease-related variations occurring in different Pfam domains. In a previous study (Savojardo et al., 2019), the same analysis was performed on a small dataset of highly curated variations covered by 3D structures from the Protein Data Bank (PDB). In this study, we extended and complemented the previous results using a larger dataset of Pfam domains and variations. To this aim, we first grouped residues according to their physico-chemical properties, obtaining four major groups, namely, apolar (GAVPLIM), aromatic (FWY), polar (STCNQH), and charged (DEKR) residues. We define a variation type in relation to the conservation or substitution of apolar (a), polar (p), aromatic (r), and charged (c) residues (Figure 1). Then, we computed Pfam-specific distributions of disease-related variations involving substitutions from one group to another (overall, 16 different substitution types are possible). Complete results are reported in Supplementary Table 3 for all the 1,670 Pfam domains.
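The mapping from a missense substitution to one of the 16 types is straightforward, as in the sketch below; the residue groups are those given in the text, while the variant notation and the toy examples (e.g., "R273H") are illustrative assumptions.

```python
from collections import Counter

GROUPS = {**{aa: "a" for aa in "GAVPLIM"},   # apolar
          **{aa: "r" for aa in "FWY"},       # aromatic
          **{aa: "p" for aa in "STCNQH"},    # polar
          **{aa: "c" for aa in "DEKR"}}      # charged

def variation_type(wild, mutant):
    """Map a substitution to one of the 16 group-to-group types."""
    return GROUPS[wild] + GROUPS[mutant]

variants = ["R273H", "G12D", "F508L", "E6V"]  # toy examples
counts = Counter(variation_type(v[0], v[-1]) for v in variants)
print(counts)  # Counter({'cp': 1, 'ac': 1, 'ra': 1, 'ca': 1})
```

Tallying these types per Pfam domain, and normalizing, yields the distributions compared against the background in Figure 1.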
In Figure 1, we show a heatmap reporting the frequencies of each substitution type for the 20 highest scoring Pfam entries described in the previous section and mostly associated with diseases. For each Pfam entry, we report the Pfam ID, the name, and two numbers in parentheses, indicating the number of proteins and disease-related variations covered by the specific Pfam. For comparison, the last row reports the overall distribution of substitution types computed on the whole set of variation types covered by Pfams.
The results shown in the heatmap of Figure 1 indicate that the different Pfams are enriched in different variation types and that each Pfam shows a differential pattern with respect to the background. Interestingly, in some cases, the pattern of enriched variation types can be related with the overall function of the Pfam domain and/or the cellular context in which the domain/s are presumably operating.
In Figure 2, we report three examples, namely, a selection of DNA-binding domains, growth factors, and transmembrane domains. For DNA-binding domains, we observe a higher concentration of disease-related variations involving a substitution from a charged residue to any different residue type. Contrarily, for growth factor domains, we observe abundant variations involving substitutions from polar residues to any other residue type, while transmembrane domains are mostly enriched in substitutions involving apolar wild types. These observations clarify a general trend, pointing to the specificity of the disease variation type per Pfam of a given functional class.
From the data analysis, we conclude that the distribution of the disease-related variation type patterns observed for the different Pfams is non-random and different from the background distribution (computed considering all the disease-related variation types occurring in Pfams). This observation confirms our previous results obtained with a smaller number of Pfam domains, directly related to human protein structures, and corroborates the notion that distinctive patterns of disease-related variation types are Pfam specific (Savojardo et al., 2019).
Linking the Pfam to Disease Ontology
As a final step of our investigation, we searched for a link between Pfam domains and disease ontology. Disease classification is not a trivial task. Different controlled vocabularies and ontologies, such as the Human Phenotype Ontology (HPO, https://hpo.jax.org/app/) (Köhler et al., 2019) or the DO (Schriml et al., 2019), are available for this purpose. However, none of the ontologies provides full coverage of the entire space of OMIM diseases, ranging from 82% coverage for HPO to 74% for DO. Moreover, ontologies like HPO are not specifically designed to describe a disease; instead, they are devised to describe clinically relevant phenotypes. In the current study, we used the DO ontology because, in spite of a slightly lower coverage, it provides a better and less ambiguous classification of diseases.

To obtain a high-level disease classification, we collected all the 3,257 OMIM diseases linked to variations occurring in our 1,670 Pfam domains and mapped them to a set of 17 first-level DO terms. These include 12 terms describing diseases affecting anatomical entities (all child terms of "DOID:7 - disease of anatomical entity", such as cardiovascular, endocrine, gastrointestinal, etc.), cellular proliferation diseases (DOID:14566), mental health diseases (DOID:150), metabolic diseases (DOID:0014667), physical disorders (DOID:0080015), and syndromes (DOID:225). We were able to map 2,454 out of 3,257 OMIMs to at least one of the above DO terms. On average, each OMIM was mapped to 1.01 DO terms, providing an almost strict classification of each OMIM into a single DO term.

FIGURE 1 | Heatmap reporting the frequency of each variation type as observed within the 20 Pfam entries most associated with diseases. For each Pfam, the numbers within parentheses indicate the number of proteins and disease-related variations covered. In variation types, labels are as follows: a, apolar; r, aromatic; p, polar; and c, charged. Mean and median Kullback-Leibler divergences (Eq. 2) between individual Pfam distributions and the background are both 2.1 bits.

FIGURE 2 | Heatmap reporting the frequency of each variation type as observed within a selection of (A) DNA-binding, (B) growth factor, and (C) transmembrane domains. For each Pfam, the numbers within parentheses indicate the number of proteins and disease-related variations covered. Labels are as in Figure 1.
With this mapping, we computed a Pfam-specific distribution of DO-associated disease classes. Complete results are reported in Supplementary Table 4 for all the 1,670 Pfam entries considered in this study. The data provided in this study indicate that disease classes are not evenly distributed among different Pfam domains, again suggesting a differentiated association between the Pfam and phenotypes.
In Figure 3, we show an extract of our analysis, focusing on the 20 highest scoring Pfam domains associated with diseases. The heatmap reports, for each Pfam, the frequency of disease types (in the 17 different classes detailed above) as retrieved from OMIMs associated with substitutions occurring on the specific Pfam. In brackets, close to each Pfam name, we list the number of proteins, disease variations, and OMIMs associated to the Pfam.
Even in this case, the distributions of disease classes appear to be very different from the background (reported in the last row of the heatmap). Remarkably, the aggregation of Pfams into more general functional classes provides an additional level of interpretation. Considering Figure 3, we can observe that DNA-binding domains are mostly associated with syndromes and with nervous system and endocrine system disease classes, while enzymes are mostly involved in the metabolic disease class. Transmembrane domains show a prevalence of nervous and integumentary disease classes, while growth factors and actin-binding domains are enriched in musculoskeletal diseases. Finally, signaling Pfam domains are prominently associated with immune system diseases. Overall, many of these findings are in line with expectations. Protein domains have different functions and are involved in different biological processes. Variations occurring in these domains, when disruptive, lead to diseases connected to the biological processes in which the proteins are mainly involved. For instance, the fact that variations occurring in transmembrane domains are often linked to neurological diseases is a direct consequence of the involvement of transmembrane proteins (among other functions) in neurotransmission. Similarly, variations in enzymes routinely lead to metabolic diseases.
Some of the Pfams reported in Figure 3 are associated with more than one disease type. For example, the diseases associated with the Forkhead domain (PF00250) are distributed into five classes, namely, nervous, mental, endocrine, and immune diseases, and syndromes. In Figure 4, an additional heatmap is shown linking the disease types to the patterns of variation types. Specifically, the patterns of variation types are reported after isolating the variations linked to OMIMs in the different disease classes. Interestingly, the patterns show evident differences from one another. This confirms the level of association that links domains to variation types and diseases.
CONCLUSION AND PERSPECTIVES
In this study, we consider, for the time being, only diseases of genetic origin, in the belief that cancer-related somatic variations are not yet satisfactorily clustered according to the tissue specificity of the malady.
This study, like previous ones (Yates and Sternberg, 2013; Wiel et al., 2017, 2019), aims at establishing a direct mapping among variations, diseases, and phenotypes via protein domains. Our novelty is the introduction of the variation type as a distinguishing feature of the association to the Pfam domain and to the phenotype. Our findings complement previous ones (Wiel et al., 2017) with the inclusion of the variation type, which adds to the classification of variations and their impact on protein function, stability, and interaction in the specific context where the gene is active.
The link among variation type, Pfam domain, and phenotype can greatly reduce the number of steps needed to understand which variations are disease-related and which are not, and which phenotype they may promote. In perspective, the association among variation type, protein domain(s), and phenotype may greatly simplify the problem of genetic variant annotation.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
"Biology"
] |
Genetic Diversity and Evolutionary Analyses Reveal the Powdery Mildew Resistance Gene Pm21 Undergoing Diversifying Selection
Wheat powdery mildew caused by Blumeria graminis f. sp. tritici (Bgt) is a devastating disease that threatens wheat production and yield worldwide. The powdery mildew resistance gene Pm21, originating from the wheat wild relative Dasypyrum villosum, encodes a coiled-coil, nucleotide-binding site, leucine-rich repeat (CC-NBS-LRR) protein and confers broad-spectrum resistance to wheat powdery mildew. In the present study, we isolated 73 Pm21 alleles from different powdery mildew-resistant D. villosum accessions, among which 38 alleles were non-redundant. Sequence analysis identified seven minor insertion-deletion (InDel) polymorphisms and 400 single nucleotide polymorphisms (SNPs) among the 38 non-redundant Pm21 alleles. The nucleotide diversity of the LRR domain was significantly higher than those of the CC and NB-ARC domains. Further evolutionary analysis indicated that the solvent-exposed LRR residues of Pm21 alleles had undergone diversifying selection (dN/dS = 3.19734). In addition, eight LRR motifs and four amino acid sites in the LRR domain also experienced positive selection, indicating that these motifs and sites play critical roles in resistance specificity. The phylogenetic tree showed that the 38 Pm21 alleles were divided into seven classes. Classes A (including the original Pm21), B and C were the major classes, including 26 alleles (68.4%). We also identified three non-functional Pm21 alleles from four susceptible homozygous D. villosum lines (DvSus-1 to DvSus-4) and two susceptible wheat-D. villosum chromosome addition lines (DA6V#1 and DA6V#3). The genetic variations of the non-functional Pm21 alleles involved a point mutation, a deletion and an insertion, respectively. The results also showed that the non-functional Pm21 alleles in the two chromosome addition lines both came from the susceptible D. villosum donors. This study gives new insight into the evolutionary characteristics of Pm21 alleles and discusses how to sustainably utilize Pm21 in wheat production. It also reveals the sequence variants and origins of non-functional Pm21 alleles in D. villosum populations.
Pm21 was originally transferred from an accession of D. villosum, collected from the Cambridge Botanical Garden, United Kingdom, to durum wheat (T. turgidum var. durum L.), and a translocation line of wheat-D. villosum T6AL·6VS carrying Pm21 was then developed (Chen et al., 1995). Using this translocation line as the powdery mildew resistance source, more than 20 varieties have been developed and released in the middle and lower reaches of the Yangtze River Valley and the southwest wheat-producing area, where powdery mildew is most rampant in China and where some Pm genes, such as Pm2a and Pm4a, are gradually losing their resistance (Bie et al., 2015a).
Undoubtedly, Pm21 is a very valuable gene that confers highly effective resistance to the tested isolates of Blumeria graminis f. sp. tritici (Bgt). However, no recombination occurs between the alien chromosome arm 6VS carrying Pm21 and the wheat homoeologous chromosome arms, which has limited the genetic mapping and cloning of Pm21 in wheat backgrounds. Recently, four seedling-susceptible D. villosum lines were identified from natural populations. Based on the fine genetic map constructed, the gene Pm21 was cloned and confirmed to encode a single coiled-coil, nucleotide-binding site, leucine-rich repeat (CC-NBS-LRR) protein (He et al., 2017). In the present study, we isolated Pm21 alleles from different resistant D. villosum accessions and determined their genetic diversity, non-synonymous and synonymous substitution rates, and positively selected sites. On the other hand, D. villosum germplasms susceptible to powdery mildew are rare: only four susceptible D. villosum lines (DvSus-1 to DvSus-4) and two wheat-D. villosum chromosome 6V disomic addition lines (DA6V#1 and DA6V#3) have been identified (Qi et al., 1998; Liu et al., 2011; He et al., 2017). Understanding why these D. villosum germplasms keep or lose their resistance to powdery mildew will be useful to extend the effective duration of Pm21 in agriculture. We also examined the sequence variations of Pm21 alleles in the above germplasms to trace their origins in the natural population of D. villosum.
Plant Materials
Dasypyrum villosum accessions were gifted by the Germplasm Resources Information Network (GRIN), GRIN Czech, the Genebank Information System of the IPK Gatersleben (GBIS-IPK), and the Nordic Genetic Resource Center (NordGen). The wheat-D. villosum chromosome 6V disomic addition lines DA6V#1 and DA6V#3 were provided by GRIN and Dr. Bernd Friebe (Kansas State University, Manhattan, KS, USA), respectively (Table S1). The D. villosum line DvRes-1 carries the original Pm21 gene. DvRes-2 and DvRes-3 were derived from powdery mildew-resistant individuals of the accessions GRA961 and GRA1114, respectively. Lines DvSus-1 to DvSus-4 were derived from susceptible individuals of the accessions GRA2738, GRA962, GRA1105, and PI 598390, respectively. The wheat variety (cv.) Yangmai 18 is a wheat-D. villosum translocation line that carries Pm21. The wheat cv. Yangmai 9 is susceptible to powdery mildew. Both were developed at the Yangzhou Academy of Agricultural Sciences, Yangzhou, China. Plants were grown under a daily cycle of 16 h of light and 8 h of darkness at 24 °C in a greenhouse.
Evaluation of Powdery Mildew Resistance
Blumeria graminis f. sp. tritici (Bgt) isolate YZ01 is a virulent isolate collected from Yangzhou region (Jiangsu Province, China). All plants, D. villosum accessions or lines and wheat varieties, were inoculated with Bgt isolate YZ01 at one-leaf stage (He et al., 2016). The powdery mildew responses of plants were evaluated at 8 d after inoculation.
DNA Isolation and Molecular Analysis of Pm21 Alleles
Genomic DNA was extracted from leaves of one-leaf-stage plants by the TE-boiling method (He et al., 2017). The marker MBH1, developed from the promoter region of the Pm21 gene (Bie et al., 2015b), was used to detect the genetic diversity of different D. villosum individuals. PCR amplification was carried out according to our previous description (He et al., 2017). PCR products of different sizes were T/A-cloned and sequenced.
Isolation of Pm21 Alleles
Total RNA of the different D. villosum accessions/lines and wheat materials was extracted from seedling leaves using the TRIzol solution (Life Technologies, Carlsbad, California, USA). About 2 µg of total RNA was used for the synthesis of cDNA using the PrimeScript™ II 1st Strand cDNA Synthesis Kit (TaKaRa, Shiga, Japan) according to the manufacturer's guidelines. Pm21 alleles were isolated from the cDNAs by PCR using the high-fidelity PrimeSTAR Max Premix (TaKaRa, Shiga, Japan) and the primer pair (forward primer: 5′-TTACCCGGGCTCACCCGTTGGACTTGGACT-3′; reverse primer: 5′-CCCACTAGTCTCTCTTCGTTACATAATGTAGTGCCT-3′). PCR products were digested with SmaI and SpeI, inserted into pAHC25-MCS1 and sequenced. The genomic DNA sequences of the alleles in the susceptible materials, DvSus-1 to DvSus-4, DA6V#1, and DA6V#3, were also isolated by PCR with LA Taq DNA polymerase (TaKaRa, Shiga, Japan) and the above primer pair. Each Pm21 allele was amplified from its donor material in three independent PCRs, followed by cloning and Sanger sequencing.
Sequence Data Analysis
Multiple alignment analysis was carried out using the CLUSTAL W tool (Thompson et al., 1994). The nucleotide diversity of the Pm21 alleles and of the coding sequences of their different domains or non-domain regions was analyzed using the MEGA7 software (Kumar et al., 2016) and assessed by Tajima's test of neutrality (Tajima, 1989). π denotes the average number of nucleotide differences per site between two sequences, and θ denotes Watterson's nucleotide diversity estimator. The synonymous substitution rate (dS), the non-synonymous substitution rate (dN), and the natural selection acting on each codon were estimated with the HyPhy program in the MEGA7 software. Sequence logos of LRR motifs were created with the WebLogo tool (Crooks et al., 2004). For the evolutionary analyses, all positions containing gaps were eliminated, leaving a total of 2,718 positions in the final dataset. A phylogenetic tree based on the cDNA sequences of the Pm21 alleles was constructed using the Neighbor-Joining method in the MEGA7 software (Kumar et al., 2016).
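For readers unfamiliar with the two diversity statistics, the sketch below computes them from a gap-free alignment: π is the mean per-site pairwise difference, while Watterson's θ is the number of segregating sites divided by the harmonic number and the alignment length. The toy alignment is illustrative; the real input would be the 38 non-redundant Pm21 coding sequences.

```python
from itertools import combinations

def pairwise_pi(seqs):
    """Average per-site nucleotide differences over all sequence pairs
    (gap-free, equal-length alignment assumed)."""
    L = len(seqs[0])
    diffs = [sum(a != b for a, b in zip(s1, s2))
             for s1, s2 in combinations(seqs, 2)]
    return sum(diffs) / len(diffs) / L

def watterson_theta(seqs):
    """Watterson's estimator: segregating sites / (harmonic number * L)."""
    n, L = len(seqs), len(seqs[0])
    S = sum(len(set(col)) > 1 for col in zip(*seqs))  # segregating sites
    a_n = sum(1 / i for i in range(1, n))
    return S / (a_n * L)

aln = ["ATGCTACGTA",   # toy alignment of three sequences
       "ATGCTACGTT",
       "ATGATACGTA"]
print(pairwise_pi(aln), watterson_theta(aln))  # 0.1333... for both
```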
Powdery Mildew Responses of Different Germplasms
The D. villosum accessions provided by the different germplasm resource institutions were collected from the Mediterranean region, mainly from Greece and Italy (Figure 1; Table S1). A total of 62 accessions were tested for their responses to Bgt isolate YZ01. All plants of 58 accessions were immune to Bgt isolate YZ01, whereas in each of the other four accessions (GRA2738, GRA962, GRA1105, and PI 598390), several individuals (2-5%) were susceptible although most plants were resistant. The four susceptible homozygous lines derived from the above accessions were designated DvSus-1 to DvSus-4, respectively. The results also showed that the wheat-D. villosum chromosome 6V disomic addition lines DA6V#1 and DA6V#3 were susceptible to powdery mildew (Figure 2).
Molecular and Nucleotide Diversity of the Pm21 Alleles
To understand the diversity at the Pm21 loci, MBH1, designed based on the promoter sequence of Pm21 (Bie et al., 2015b), was used to genotype the resistant individuals from the 62 D. villosum accessions. The PCR products were sequenced, and eight representative bands of different sizes (271, 339, 340, 341, 342, 344, 396, and 467 bp) were found. This indicated that insertion-deletion (InDel) polymorphisms exist in the promoter regions of different Pm21 alleles. Given that all MBH1 sequences were isolated from resistant individuals, it was suggested that the variations in the promoter regions have no obvious adverse impact on the expression of Pm21 alleles. In some individuals, two specific DNA bands were observed (Figures S1, S2), suggesting that these individuals might be heterozygous at the Pm21 loci.
We then isolated Pm21 alleles from the resistant individuals of the 62 D. villosum accessions. Each tested individual of 52 accessions carried one copy of a Pm21 allele. However, owing to the open pollination of the D. villosum species, each tested individual of 9 accessions (PI 368886, W619414, W67270, GRA960, GRA1109, GRA1114, GRA2711, GRA2716, and 01C2300013) carried two copies of Pm21 alleles. In addition, three different alleles were isolated from three individuals of the accession PI 251478. As a result, a total of 73 Pm21 alleles were isolated in this study (Table S1). Among them, 38 alleles were non-redundant, sharing 91.7-100% identity with each other. In total, seven InDels (Table S2), including three 3-bp insertions, one 30-bp insertion and three 3-bp deletions, and 400 single nucleotide polymorphism (SNP) sites were identified among these alleles. The 38 non-redundant Pm21 alleles and the coding sequences of their different domains were further used to determine the nucleotide diversity. The average pairwise nucleotide diversity π and Watterson's nucleotide diversity estimator θ of the full-length Pm21 alleles were 0.039096 and 0.035027, respectively. Compared with the full-length alleles, the values of π and θ of the NB-ARC domain-encoding sequences were slightly lower (π = 0.036868 and θ = 0.034204), whereas those of the CC domain-encoding sequences were significantly lower (π = 0.013115 and θ = 0.012973) and those of the LRR domain-encoding sequences were markedly higher (π = 0.051892 and θ = 0.044652). These results indicated that the CC domain is more conserved than the other domains, whereas the LRR domain is more variable. We also analyzed the π and θ values of Linker 1 and Linker 2, the regions between the CC and NB-ARC domains and between the NB-ARC and LRR domains, respectively. The data showed that Linker 1 had no nucleotide diversity. Contrarily, Linker 2 had the highest nucleotide diversity (π = 0.054507 and θ = 0.054092) of all the domains or regions of the Pm21 alleles (Figure 3; Table 1). Up to now, the function of Linker 2 remains unclear. One reasonable explanation for its high variation is that Linker 2 may be an extension of the LRR domain.

FIGURE 2 | Powdery mildew responses; wheat cvs. Yangmai 18 and Yangmai 9 were used as the controls. Line DvRes-1 carries Pm21. Lines DvRes-2 and DvRes-3, carrying Pm21-C4 and Pm21-G2, were the resistant individuals of the accessions GRA961 and GRA1114, respectively. Lines DvSus-1 to DvSus-4, carrying non-functional Pm21 genes, were the susceptible individuals of the accessions GRA2738, GRA962, GRA1105, and PI598390, respectively.
Selection Pressure Analysis
To determine the potential evolutionary selection acting on Pm21 alleles, dN and dS rates were assessed using the HyPhy program. The dN/dS ratios of the full-length Pm21, CC-, NB-ARC-, and LRR-encoding sequences were 0.72046, 0.22671, 0.48723, and 1.15098, respectively, which suggested that the LRR domain might be under positive selection. The dN/dS ratios of the structural LRR residues and the solvent-exposed LRR residues, the two parts of the LRR domain, were 0.88106 and 3.19734, respectively (Table 2). This indicated diversifying selection acting on the solvent-exposed residues in the LRR domain of Pm21 alleles. The LRR domain of Pm21 consists of 16 LRR motifs. The dN/dS ratios of 8 LRR motifs (LRR4-LRR7, LRR10, LRR11, LRR15, and LRR16) were greater than 1. Among them, the dN/dS ratio of LRR11 was 8.58259, and that of LRR16 was infinite because its dS value was zero (Figure 4; Table 2). These results indicated that these LRR motifs have undergone positive selection; in addition, four amino acid sites in the LRR domain were predicted to be under positive selection (Table S3).
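As a much-simplified, illustrative proxy for this analysis (the study itself used the HyPhy program, which also normalizes by the numbers of synonymous and non-synonymous sites, as in the Nei-Gojobori method), the sketch below merely counts synonymous versus non-synonymous codon differences between two aligned coding sequences, considering only codons that differ at a single position. The example sequences are toy data.

```python
from Bio.Data import CodonTable

AA = CodonTable.unambiguous_dna_by_name["Standard"].forward_table

def translate(codon):
    return AA.get(codon, "*")   # stop codons translate to "*"

def count_subs(cds1, cds2):
    """Count (non-synonymous, synonymous) single-position codon
    differences between two aligned, in-frame coding sequences."""
    syn = nonsyn = 0
    for i in range(0, len(cds1) - 2, 3):
        c1, c2 = cds1[i:i+3], cds2[i:i+3]
        if c1 != c2 and sum(a != b for a, b in zip(c1, c2)) == 1:
            if translate(c1) == translate(c2):
                syn += 1
            else:
                nonsyn += 1
    return nonsyn, syn

# CTG->CCG changes Leu to Pro (non-synonymous); CGT->CGA keeps Arg.
print(count_subs("ATGCTGCGT", "ATGCCGCGA"))  # -> (1, 1)
```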
Phylogenetic Analysis and Classification of the Pm21 Alleles
The phylogenetic tree of the Pm21 alleles showed that the 38 non-redundant alleles clustered into seven clades (Clades A to G). Among these, Clades A, B, and C were the major types in the D. villosum populations, together including 26 members and accounting for 68.4% (Figure 5).
According to the clades in the phylogenetic tree, the Pm21 alleles isolated from the resistant D. villosum accessions were correspondingly divided into seven classes (Classes A to G). Class A consisted of 9 alleles, Pm21-A1 to Pm21-A9, whose open reading frames (ORFs) were 2,730 bp in length and shared the highest identities with Pm21 (99.2% on average). Class B contained 10 alleles, Pm21-B1 to Pm21-B10, most of which were 2,724 bp long and shared 96.6% identity with Pm21 on average. Class C harboured 7 alleles, Pm21-C1 to Pm21-C7, 2,730 bp in length, with 96.7% identity to Pm21 on average. The remaining 12 alleles, sharing 92.1-97.0% identity with Pm21, were divided into four classes, Classes D to G, whose distinctive sequence characteristic was a 30-bp insertion relative to Pm21 (Table 3).
Natural Variations of Pm21 Alleles in Susceptible Germplasms
To identify the rare natural variations leading to loss of resistance to powdery mildew, we isolated Pm21 alleles from the susceptible D. villosum lines DvSus-1 to DvSus-4, derived from the accessions GRA2738, GRA962, GRA1105, and PI 598390, respectively. The non-functional allele Pm21-NF1, isolated from the genome of DvSus-1, was 3,699 bp in length with an ORF of 2,730 bp. Compared with Pm21, Pm21-NF1 had 98 SNPs; however, compared with the 38 non-redundant alleles isolated from the resistant D. villosum accessions, Pm21-NF1 had only two specific variations. The first was a transversion, G61T, leading to the amino acid change A21S in the CC domain. The second was a transition, A821G, resulting in the change D274G (Figure S3A), corresponding to the latter aspartate (D) of the kinase-2 motif (also called the Walker B motif; consensus sequence: LLVLDDVW) in the NB-ARC domain. This latter D is considered to act as the catalytic site for ATP hydrolysis and activation of disease resistance proteins (Meyers et al., 1999; Tameling et al., 2006). Here, bioinformatic analysis showed that this D is highly conserved in all the tested disease resistance proteins from Arabidopsis thaliana, barley (Hordeum vulgare L.) and wheat (Figure S4), suggesting that the amino acid change D274G might cause the loss of function of Pm21-NF1.
The genomic sequence of the non-functional allele Pm21-NF2, isolated from the susceptible line DvSus-2, was 3,698 bp in length; its ORF contained a 1-bp deletion after position 876, leading to a frame shift and resulting in a truncated protein (296 aa). The variations of the Pm21 alleles isolated from DvSus-3 and DA6V#3 were both identical to that of Pm21-NF2. In DvSus-4 and DA6V#1, the allele sequences were identical (4,988 bp) and were designated Pm21-NF3. Pm21-NF3 harboured an insertion of 1,281 bp that caused a premature stop codon (Figure S3B) and led to the loss of the last four LRR motifs. These results suggested that the non-functional Pm21 alleles in DA6V#1 and DA6V#3 both directly originated from their D. villosum donors susceptible to powdery mildew.
Diversity, Classification and Geographic Distribution of Pm21 Alleles
As a wild relative of wheat, D. villosum possesses several powdery mildew resistance genes with important potential for controlling wheat powdery mildew disease (He et al., 2017). Among them, Pm21 and PmV, located on chromosome 6VS and derived from different D. villosum accessions, confer powdery mildew resistance at all plant growth stages. It seems that Pm21 and PmV may be allelic (Bie et al., 2015b). Both Pm55 and Pm62 confer resistance at the adult-plant stage but not at the seedling stage (Zhang et al., 2016, 2018). In this study, the Bgt responses of all D. villosum accessions were scored at the one-leaf stage, which excludes the resistance conferred by Pm55 and Pm62. Therefore, the seedling resistance in these materials was considered to be provided by Pm21 alleles.

FIGURE 4 | Sequence logos of the 16 LRR motifs encoded by Pm21 alleles. In the LxxLxLxx motifs, x represents the predicted solvent-exposed LRR residues, and L represents a leucine or another aliphatic amino acid residue. The sites at positions 628, 885, 903, and 905, indicated by arrows, are predicted to be under positive selection.
Recently, the broad-spectrum powdery mildew resistance gene Pm21 was isolated from D. villosum using a map-based cloning strategy. Based on the investigation of the powdery mildew responses of different D. villosum accessions collected from the Mediterranean countries, we isolated 73 Pm21-like sequences from the resistant individuals. Previous work showed that Pm21 is adjacent to another CC-NBS-LRR-encoding gene, DvRGA1. Although DvRGA1 is the highest-matching gene to Pm21 in the GenBank database, the two share only 72.7% nucleotide sequence identity. Here, the isolated Pm21-like genes shared 91.7-100% identity with each other, indicating that all the sequences are identical or allelic to Pm21. Of the 73 sequences, 38 were different from each other.
Compared with Pm21, the other 37 non-redundant alleles have seven InDels of 3 bp, 6 bp, 30 bp, 33 bp, or 36 bp, which allow the alleles to maintain correct ORFs and encode full-length proteins. The alleles also had many SNPs, and the average pairwise nucleotide diversity of the LRR-encoding region was significantly higher than those of the CC- or NB-ARC-encoding regions. Compared with the other domains, the LRR domain is supposed to have undergone faster evolution. Because all the individuals containing these alleles were still resistant to the highly virulent Bgt isolate YZ01, it is proposed that the wide variation among Pm21 alleles has no obvious adverse effect on disease resistance. However, whether they all retain broad-spectrum resistance remains to be disclosed.
Phylogenetic analysis identified seven independent clades covering all the Pm21 alleles, among which Classes A to C represented the three major classes. The functional Pm21 gene was originally found in an accession provided by the Cambridge Botanic Garden in the United Kingdom, but the exact collection site of this accession is unclear. Pm21, with the systematic name Pm21-A1 here, belongs to Class A, whose members were only found in accessions from Greece or Turkey. In particular, among the six isolated sequences identical to Pm21, five came from independent Greek accessions and one from a Turkish accession. Therefore, based on the present data, it is proposed that the original D. villosum donor of Pm21 might come from Greece or Turkey.
The geographic distributions of the different Pm21 alleles were further investigated in this study. The Pm21 alleles isolated from Greek D. villosum accessions had the greatest genetic diversity and covered most members of all seven classes (Classes A to G). In addition, Pm21-A8, Pm21-E2, and Pm21-F3 were only detected in Turkish accessions, and Pm21-B7 and Pm21-G2 were only detected in Italian accessions (Table S1). The characteristics of the geographic distributions of the Pm21 alleles may help in searching for accessions carrying specific Pm21 alleles as donors for future breeding purposes.
Variations and Origins of Non-functional Pm21 Alleles in Susceptible D. villosum Lines and Wheat Genetic Stocks
It has long been believed that D. villosum resources are all resistant to wheat powdery mildew (Qi et al., 1998). In our previous work, four D. villosum lines, DvSus-1 to DvSus-4, susceptible to powdery mildew were identified from different accessions of D. villosum, which made it possible to clone Pm21 using the map-based cloning strategy (He et al., 2017). In this study, we demonstrated that the variations of the Pm21 alleles Pm21-NF1 to Pm21-NF3, isolated from the four susceptible D. villosum lines, involved a point mutation, a deletion and an insertion, respectively. Among them, Pm21-NF1 had an important amino acid change (D274G) in the highly conserved kinase-2 motif of the NB-ARC domain that might hamper ATP hydrolysis (Meyers et al., 1999; Tameling et al., 2006), while Pm21-NF2 and Pm21-NF3 both encoded truncated proteins caused by premature stop codons.
Previously, the wheat-D. villosum chromosome 6V disomic addition lines DA6V#1 and DA6V#3 were reported to be highly susceptible to powdery mildew (Qi et al., 1998; Liu et al., 2011). During the creation of the two addition lines, colchicine was used for chromosome doubling, and colchicine has in fact been proved to be an effective mutagen (Gilbert and Patterson, 1965). Therefore, it was not known whether the susceptibilities of DA6V#1 and DA6V#3 arose from the colchicine treatment or from the D. villosum donors. Now that Pm21 has been cloned, through sequencing of the allelic genes we demonstrated here that the Pm21 alleles isolated from DA6V#1 and DA6V#3 carry variations identical to those of Pm21-NF3 (DvSus-4) and Pm21-NF2 (DvSus-2 and DvSus-3), respectively. Therefore, the variations of the Pm21 alleles in DA6V#1 and DA6V#3 both originated from their D. villosum donors, rather than from the colchicine treatment.
The non-functional alleles Pm21-NF1, Pm21-NF2, and Pm21-NF3 were found in the accessions GRA2738, GRA962, and PI 598390, respectively. In theory, their wild-type genes could be isolated from these accessions. We attempted to do so but did not succeed. The major reason may be that D. villosum is highly outcrossing, so pollen carrying a mutated gene tends to segregate away from pollen carrying the corresponding wild-type gene. Therefore, we attempted to trace the origins of the non-functional alleles through evolutionary analysis. The origins of two of them, Pm21-NF2 and Pm21-NF3, were traceable in natural populations of D. villosum. Apart from the identified mutations, the sequences of Pm21-NF2 and Pm21-NF3 were entirely identical to those of Pm21-C4 and Pm21-G2, which were cloned from resistant individuals of the accessions GRA961 and GRA1114, respectively. Hence, we concluded that the non-functional alleles Pm21-NF2 and Pm21-NF3 originated from Pm21-C4 and Pm21-G2, respectively. The origin of Pm21-NF1, however, remains unclear.
Diversifying Selection Acting on the Solvent-Exposed LRR Residues of Pm21 Alleles
It has been confirmed that the broad-spectrum resistance of Pm21 is conferred by a single CC-NBS-LRR-encoding gene. However, the resistance provided by this kind of gene is generally believed to be race-specific and prone to being overcome by fast-evolving pathogens. For instance, Pm8 from rye (Secale cereale L.), which also encodes a CC-NBS-LRR protein and previously provided effective resistance to wheat powdery mildew (Hurni et al., 2013), has lost its resistance in most wheat-producing regions through worldwide utilization. In this study, the dN/dS value (3.19734) significantly exceeded 1 for the solvent-exposed LRR residues, which are considered to take part in the specific recognition of pathogens (Meyers et al., 1999). This result suggests that the solvent-exposed LRR residues of Pm21 have undergone diversifying selection and may play critical roles in resistance specificity. This situation is similar to those of the race-specific powdery mildew resistance genes Pm3 from wheat (Srichumpa et al., 2005) and Mla from barley (Seeholzer et al., 2010). Several studies have reported that wheat varieties carrying Pm21 could be infected by Bgt pathogens in different regions (Yang et al., 2009). Therefore, combined with the evolutionary analysis, we speculate that Pm21 may be a race-specific resistance gene, although it has so far provided broad-spectrum resistance to most Bgt isolates.
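To make the counting behind a dN/dS-style analysis concrete, the minimal Python sketch below classifies codon differences between two short aligned coding sequences as synonymous or nonsynonymous. The toy sequences and the mini codon table are hypothetical illustrations; the value reported above (3.19734) comes from a proper codon-substitution estimator applied to the solvent-exposed LRR codons, not from a raw count like this.

```python
# Toy nonsynonymous/synonymous counter for two aligned, in-frame coding
# sequences. A real dN/dS estimate also normalizes by the numbers of
# potential nonsynonymous and synonymous sites; this sketch only counts.
CODON_TO_AA = {                 # hypothetical mini table covering the toy codons
    "GAT": "D", "GGT": "G",     # D -> G mirrors the D274G change discussed above
    "CTT": "L", "CTC": "L",     # synonymous leucine codons
    "AAA": "K", "AGA": "R",
}

def count_substitutions(seq_a: str, seq_b: str):
    """Count synonymous vs. nonsynonymous codon differences."""
    syn = nonsyn = 0
    for i in range(0, len(seq_a), 3):
        ca, cb = seq_a[i:i + 3], seq_b[i:i + 3]
        if ca == cb:
            continue
        if CODON_TO_AA[ca] == CODON_TO_AA[cb]:
            syn += 1            # nucleotide change, same amino acid
        else:
            nonsyn += 1         # amino acid change
    return nonsyn, syn

print(count_substitutions("GATCTTAAA", "GGTCTCAGA"))  # -> (2, 1)
```

An excess of nonsynonymous over synonymous changes, sustained across many allele pairs and restricted to the solvent-exposed LRR codons, is the signature of diversifying selection described above.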
Since 1995, when the wheat-D. villosum translocation line T6AL.6VS was released, many wheat varieties carrying Pm21 have been commercialized in China, mainly in the middle and lower reaches of the Yangtze River Valley and the southwestern wheat-producing regions, where the Bgt pathogen is prevalent (Jiang et al., 2014; Bie et al., 2015a; Cheng et al., 2020). The long-term and wide-range application of Pm21 in agriculture will accelerate the evolution of Bgt pathogens, and Pm21 will correspondingly face an increasing risk of losing its resistance to powdery mildew. Consequently, it will be a great challenge to sustainably utilize Pm21 resistance in the future. In this study, a total of 38 non-redundant Pm21 alleles were obtained, which will allow comparative analysis of their fine-scale functions against Bgt pathogens in future research. Utilizing different Pm21 alleles with functional diversity would be one way to extend the lifespan of Pm21 resistance in wheat production. The marker MBH1, which can reveal the genetic diversity of Pm21 alleles to some degree, will be a useful tool when transferring them from D. villosum into common wheat. Other reasonable measures would be the diversified use of Pm genes in the field, such as pyramiding other effective Pm gene(s) into Pm21-carrying varieties, or exploring new Pm genes and developing wheat varieties carrying different Pm genes.
DATA AVAILABILITY STATEMENT
The datasets generated for this study can be found in GenBank under accession numbers MG831524-MG831526, MG831528-MG831561, and MH184801-MH184806.
"Biology",
"Environmental Science"
] |
Neural network-based adaptive sliding mode control for underactuated dual overhead cranes suffering from matched and unmatched disturbances
To improve transportation capacity, dual overhead crane systems (DOCSs) are playing an increasingly important role in the transportation of large/heavy cargos and containers. Unfortunately, existing control methods fail to fully consider such factors as external disturbances, input dead zones, parameter uncertainties, and other unmodeled dynamics that DOCSs usually suffer from. As a result, the control performance degrades dramatically, which severely hinders the practical application of DOCSs. Motivated by this fact, this paper designs a neural network-based adaptive sliding mode control (SMC) method for DOCSs that solves the aforementioned issues and achieves satisfactory control performance for both actuated and underactuated state variables, even in the presence of matched and unmatched disturbances. The asymptotic stability of the desired equilibrium point is proved through rigorous Lyapunov-based analysis. Finally, extensive hardware experimental results are collected to verify the efficiency and robustness of the proposed method.
Introduction
Single overhead crane systems (SOCSs) are widely used in harbors, factories, and workshops. As typical underactuated systems, cranes have fewer control inputs than degrees of freedom, which makes the controller design task very challenging. Moreover, according to the transportation requirements, the control goal of crane systems includes accurate trolley positioning and fast payload swing suppression. Specifically, the swing angles need to be suppressed as much as possible during transportation so that the payload can be transported to the desired position stably. To achieve such requirements, the couplings between the unactuated states (i.e., the payload swing) and the actuated states must be fully exploited. Meanwhile, the dual overhead crane system (DOCS) plays a more and more important role in the modern cargo transportation industry. Furthermore, in addition to the issues existing in SOCSs, the DOCS exhibits more complex couplings and stronger nonlinearities, which makes the controller design problem even more challenging. Specifically, compared with a SOCS, a DOCS has more degrees of freedom. Moreover, in addition to non-holonomic constraints, there are geometric constraints in the DOCS, which make the couplings between the system states more sophisticated. Recently, these open and challenging problems have attracted increasing attention from scholars. Some scholars apply open-loop control algorithms to DOCSs [20-23]. [24] proposes a modified extra-insensitive input shaper to suppress the payload swing and pitch in DOCSs, which is validated by simulation and experimental results. In [25], an automatic path planning algorithm for dual cranes is designed that can quickly generate optimized lifting paths even under complex constraints. However, the control performance of these open-loop algorithms degrades drastically under various disturbances. For this reason, some closed-loop control algorithms have been developed [26,27]. To achieve trajectory tracking of multiple mobile cranes, Qian et al. [28] construct a robust iterative learning controller based on a linearization of the dynamics. Perig et al. develop a series of linearization methods for DOCSs and then achieve optimal control in [29]. [30] designs an adaptive output feedback controller to achieve high-performance control of a DOCS with payload hoisting/lowering ability.
Summarizing the existing results, some scholars have conducted preliminary research on DOCSs and achieved phased results in certain aspects. However, the control of DOCSs still presents many issues, which are summarized as follows: 1) Most controllers applied to DOCSs are open-loop controllers. For example, input shapers, like other open-loop algorithms, are less robust when the system suffers from initial swing disturbances and various other disturbances, which is a non-negligible drawback in practice. In addition, to address the closed-loop control problem of DOCSs, linearizations or approximations are widely adopted, which cannot guarantee the stability of the system under external disturbances. 2) Worse still, most existing results are based on exact model knowledge (i.e., some parameters of the DOCS appear in the control law). In practice, measuring the precise values of these parameters greatly reduces transportation efficiency. Furthermore, plant parameters (e.g., the payload mass) vary frequently across different transportation tasks, which may greatly degrade the control performance of these methods.
3) Moreover, DOCSs are persistently disturbed by matched disturbances. For example, the input dead zones of servo motors and frictions are piecewise discontinuous nonlinear functions whose precise modeling is an open problem. To eliminate the swing, the trolley needs to move back and forth, which changes the direction of the friction force, making it even more difficult for the controller to achieve satisfactory anti-swing performance. 4) Presently, almost no robust algorithm for DOCSs has been developed. For instance, SMC has achieved remarkable results on relatively simple underactuated systems [12]. However, the DOCS presents higher degrees of freedom and stronger nonlinearities, so it is quite difficult to design a sliding surface that stabilizes all states simultaneously and to complete the stability analysis. Based on these observations, this paper proposes an adaptive SMC method based on a neural network and a new sliding manifold to solve the above problems. Specifically, two groups of new auxiliary variables are constructed to transform the DOCS into a cascade system. Based on these closely related variables, a new sliding surface is elaborately constructed with more swing-related information incorporated to suppress payload swing. Besides, a neural network is adopted to address parameter uncertainties, so that the DOCS can deliver payloads of different masses and sizes. Moreover, unmodeled dynamics, such as frictions and the input dead zones of servo motors, are also compensated by the neural network. Finally, an adaptive SMC is designed to stabilize both actuated and unactuated states, even when the DOCS suffers from both matched and unmatched disturbances, which guarantees more reliable performance in practical applications. The asymptotic stability of the desired equilibrium point is strictly guaranteed through the Lyapunov technique without resorting to linearizations or approximations. Furthermore, extensive hardware experimental results are collected to validate the efficiency and robustness of the proposed method.
The innovations and contributions in this paper are summarized as follows: 1) Based on two groups of new auxiliary variables, a new sliding surface is elaborately constructed with more swing-related information incorporated, which can stabilize both actuated and unactuated states and suppress swing quickly, even when DOCSs suffer from matched and unmatched disturbances. 2) A neural network is introduced into the SMC controller to address parameter uncertainties and unmodeled dynamics of DOCS and hence improve the system robustness, which better facilitates the practical application of the proposed method.
3) The asymptotic stability of the desired equilibrium point is demonstrated without resorting to linearizations or approximations.
The rest of this article is structured as follows: Section 2 briefly introduces the DOCS and the control problem. Section 3 depicts the design process of the adaptive sliding mode controller. In Sect. 4, the stability analysis is presented. Hardware experimental results are shown in Sect. 5. Finally, Sect. 6 concludes the paper.
Problem statement
To facilitate description, the following notations are defined. As shown in Fig. 1, a large/heavy payload is cooperatively transported by two cranes that run on the same rail. Each crane is connected to the payload through a hoisting rope (the two connection points are A_1 and A_2), whose length is denoted by l (> 0). The distance between A_1 and A_2 is 2a. m_1 and m_2 denote the masses of crane 1 and crane 2, respectively. The driving forces of the two cranes are F_1 and F_2, respectively. x_1(t) and x_2(t) indicate the traveling displacements of crane 1 and crane 2. m stands for the mass of the payload. The payload inclination is θ_3, while the swing angles of the two hoisting ropes are θ_1 and θ_2, respectively. b denotes the vertical distance between the payload barycenter and the line A_1A_2.
It can be observed from Fig. 1 that the DOCS is subject to the following geometric constraints:

l s_1 - l s_2 + 2a c_3 - (x_2 - x_1) = 0, l c_1 - l c_2 + 2a s_3 = 0, (1)

where s_i = sin θ_i and c_i = cos θ_i (i = 1, 2, 3). The input dead zones of the servo motors are expressed as follows, where Z(F_1) and Z(F_2) denote the real input forces of the DOCS, and h_l(*) and h_r(*) represent the lower and upper bounds of the dead zone, respectively. To facilitate the design of the controller, Z(F_1) and Z(F_2) are redefined accordingly. In this paper, Lagrange's equation is utilized to establish the model of the DOCS, and a similar modeling process can be found in [31]: here L(t) = T(t) - U(t) denotes the Lagrangian, T(t) the kinetic energy of the system, U(t) the gravitational potential energy of the system, and q_k and Q_k the generalized coordinates and generalized forces, respectively. After derivation, the explicit expression of the kinetic energy T(t) can be arranged as in (4), and the gravitational potential energy U(t) is calculated as in (5). Considering the DOCS suffering from both matched and unmatched disturbances and substituting (4) and (5) into (3), the dynamic equations of the DOCS are described by (6)-(10), where d_a1 and d_a2 denote matched disturbances, including the frictions of the two trolleys and other unmodeled dynamics, and d_u1, d_u2, and d_u3 represent unmatched disturbances.
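As a concrete illustration of the Lagrangian modeling step described above, the following sympy sketch derives the equations of motion for a simplified single cart-pendulum (one trolley, one massless rope, point-mass payload). This toy model and its symbols are illustrative assumptions; it is not the 5-DOF DOCS model of (6)-(10).

```python
# Minimal sympy sketch of Lagrange's equation d/dt(dL/dq_dot) - dL/dq = Q for
# a toy cart-pendulum: trolley mass m1, massless rope of length l, payload m.
# Illustrative only; not the full 5-DOF DOCS model of (6)-(10).
import sympy as sp

t = sp.symbols("t")
m1, m, l, g, F = sp.symbols("m1 m l g F", positive=True)
x = sp.Function("x")(t)        # trolley displacement (actuated by force F)
th = sp.Function("theta")(t)   # rope swing angle (unactuated)

xp = x + l * sp.sin(th)        # payload position
yp = -l * sp.cos(th)
T = sp.Rational(1, 2) * m1 * x.diff(t)**2 \
    + sp.Rational(1, 2) * m * (xp.diff(t)**2 + yp.diff(t)**2)  # kinetic energy
U = m * g * yp                 # gravitational potential energy
L = T - U                      # the Lagrangian L(t) = T(t) - U(t)

def lagrange_eq(q, Q):
    """Euler-Lagrange residual for coordinate q with generalized force Q."""
    return sp.simplify(sp.diff(L.diff(q.diff(t)), t) - L.diff(q) - Q)

print(lagrange_eq(x, F))       # actuated equation of motion
print(lagrange_eq(th, 0))      # unactuated (swing) equation of motion
```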
To facilitate analysis, (6)-(10) are transformed into the compact form (11), where d denotes the unknown disturbance vector, M(q), C(q, q̇) ∈ R^{5×5} and G(q) represent the inertia matrix, the centripetal-Coriolis matrix, and the gravity vector, respectively, and the remaining terms are the state vector q and the control input vector; the explicit expressions for the matrices in (11) are given accordingly. Due to physical constraints, the following assumptions are made for the DOCS.
Assumption 1
The swing angles of the payload and the payload inclination are all bounded, as stated in (12).

Assumption 2 The disturbances d_a1, d_a2, d_u1, d_u2, and d_u3 have the bounded property given in (13).

Remark 1 Condition (12) is introduced to reflect that the payload of a crane system cannot run above the rail, which is widely assumed in [8,14,26,30]. Besides, real-world disturbances are bounded, and assumptions similar to (13) are used extensively in many related works [12,32,33]. Symmetry widely exists in underactuated systems [34]. Specifically, the symmetry property M(q) = M(q_a) (q_a denotes the actuated vector of the DOCS) holds for the DOCS, which is easily obtained from the expression of M(q). Furthermore, underactuated systems can be transformed into cascade forms by utilizing symmetry, which is convenient for controller design. To achieve this transformation, and motivated by [34], two groups of auxiliary variables are constructed as in (14), where (6)-(9) are utilized and η_{1i}, η_{2i}, η_{3i}, η_{4i}, η_{1id}, η_{2id}, η_{3id}, η_{4id}, i = 1, 2 are defined accordingly; here x_{d1} and x_{d2} denote the reference trajectories of crane 1 and crane 2, respectively, and the desired values for θ_1, θ_2 (i.e., θ_{1d}, θ_{2d}) are given as well. Taking the time derivative of (14), the dynamic equations of the DOCS in (11) can be transformed into the cascade forms (18). Combining (14)-(16) and (18), the convergence of the auxiliary variables is equivalent to the convergence of the full state [x_1, x_2, θ_1, θ_2, θ_3, ẋ_1, ẋ_2, θ̇_1, θ̇_2, θ̇_3] to its desired value. The control objective of this paper is to drive the DOCS to the desired equilibrium point (20) and to improve the robustness of the system with fast swing elimination. To transport the payload to the desired position, property (21) always holds for the reference trajectories.

Remark 2 In [26], the geometric constraints (1) are incorporated into the dynamic equation (11) by means of implicit functions to obtain a lower-dimensional model, which facilitates controller design and stability analysis. However, the characteristics of the model are then hidden and the couplings between states become more complicated, making the designed controller non-intuitive. In this paper, treating the geometric constraints as couplings between states, the dynamic equation (11) is transformed into the cascade forms (18) by utilizing symmetry. The resulting controller based on the cascade forms is concise and intuitive, as shown in Sect. 3.
Controller design
In this section, an adaptive sliding mode controller is designed for the DOCS to enhance its robustness. Specifically, a sliding surface based on the cascade forms is constructed. Besides, a neural network is introduced to estimate the unknown parts of the system dynamics.
Based on the newly obtained variables (14), the following sliding manifold is constructed, where λ_{1i}, λ_{2i}, λ_{3i}, i = 1, 2 are parameters to be determined. The explicit expressions of the time derivatives ṡ_1 and ṡ_2 are then calculated, where (18) is utilized and h_i(m, l, b, t) is the part of the controller related to the system parameters; h̄_i denotes its nominal value, and the actual value is written as h_i = h̄_i + Δh_i. Therefore, the unknown parts of (23) can be rearranged as follows, where D_1 and D_2 are the unknown parts of ṡ_1 and ṡ_2, respectively. The following neural networks are introduced to approximate the unknown parts D_1 and D_2, with input x = [x_1, x_2, θ_1, ẋ_1, ẋ_2, θ̇_1, 1]^T, activation function σ(·) = 1/(1 + e^{-(·)}), input and output weights W_1, W_2, and approximation errors ε_1, ε_2 whose upper bounds are ε̄_1, ε̄_2. Based on the above analysis, the adaptive sliding mode controllers (27) are designed, where K_1, K_2 are positive control gains to be selected, f_1 and f_2 are introduced to provide robustness against the estimation errors of the radial basis function neural network (RBFNN), and k_1 > ε̄_1, k_2 > ε̄_2.
To construct the update laws of the neural network, the following estimation errors are first defined. The update laws of the weights W_1, W_2 are then elaborately designed as follows, where a is a positive parameter and Γ_1 and Γ_2 are positive diagonal parameter matrices. To illustrate the entire control system design process, a block diagram is provided in Fig. 2.
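To make the structure of the control and adaptation loop concrete, the following NumPy sketch implements a one-trolley stand-in: a sigmoid hidden layer, a control law of the form -(K s + NN estimate + k·sign(s)), and a leakage-type weight update Ẇ = Γ(σ(x)s - aW). The dimensions, gains, and the exact form of the update are illustrative assumptions standing in for (27) and the update laws above, not the paper's exact expressions.

```python
# Schematic one-trolley sketch of the adaptive NN sliding-mode loop. The
# sliding variable s, sigmoid features, control law, and the leakage-type
# update W_dot = Gamma(sigma(x)*s - a*W) are illustrative stand-ins for the
# paper's equations; gains and dimensions are arbitrary example values.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 7, 8
K, k_rob, a, dt = 5.0, 0.5, 0.01, 0.001
Gamma = 0.1 * np.eye(n_hidden)            # positive diagonal adaptation gains
V = rng.normal(size=(n_in, n_hidden))     # input weights (held fixed here)
W = np.zeros(n_hidden)                    # adapted output weights

def sigma(x):
    """Sigmoid hidden-layer features, sigma(V^T x)."""
    return 1.0 / (1.0 + np.exp(-V.T @ x))

def control(x, s):
    """SMC law with NN compensation and robust switching term."""
    D_hat = W @ sigma(x)                  # NN estimate of the lumped unknown D
    return -(K * s + D_hat + k_rob * np.sign(s))

def adapt(x, s):
    """One Euler step of the weight update law."""
    global W
    W = W + dt * (Gamma @ (sigma(x) * s - a * W))

x = np.array([0.1, 0.0, 0.02, 0.0, 0.0, 0.0, 1.0])  # [x1, x2, th1, dx1, dx2, dth1, 1]
s = 0.05                                             # current sliding variable
print(control(x, s)); adapt(x, s); print(np.linalg.norm(W))
```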
Remark 3 The RBFNN adopted in this paper is capable of universal approximation [37]. However, almost no existing neural network-based controller for underactuated systems is robust to unmatched disturbances [19]. Based on the elaborately designed sliding surface and neural network, the proposed method achieves robust control even when the DOCS suffers from unmatched disturbances. Specifically, when the payload is disturbed, the cranes need to move back and forth to eliminate the swing, which requires appropriate swing-related feedback to be incorporated into the sliding surface. Moreover, the friction force changes drastically during this process, and a large part of it can be compensated by the one-hidden-layer RBFNN with its fast approximation capability.
Remark 4 Although the adopted neural network is able to compensate for the frictions and the dead zones of the actuators, it is difficult to quantify the accuracy of this compensation because the frictions and the dead zones are hard to measure. Furthermore, the sliding mode controller is itself robust to various disturbances and can partially overcome the frictions and the dead zones of the actuators. On the other hand, the frictions keep changing throughout the entire transportation process, and neural networks compensate for quickly changing signals with some delay. Hence, once the system reaches the desired equilibrium point, it can be concluded that the sliding mode controller and the neural network together accurately compensate for the frictions and other unmodeled dynamics.
Remark 5 Although the sliding mode controller is known for strong robustness, the cost is that its control gain may be large, and the switching function causes chattering in the motors and the mechanical systems. These factors hinder the application of sliding mode controllers. Hence, we adopt the neural network to compensate for the various disturbances, thereby reducing the control gains of the sliding mode controller and weakening the chattering of the mechanical systems.
Stability analysis
Before carrying out analysis, the following property is introduced:
Property 1
If H ∈ R^{n×n} is a positive definite, symmetric matrix, the following property holds:

h_1 ||x||^2 ≤ x^T H x ≤ h_2 ||x||^2 for all x ∈ R^n,

where h_1, h_2 denote the minimum and the maximum eigenvalues of H, respectively.

Theorem 1 For the underactuated DOCS (6)-(10), the designed controller (27) guarantees that the desired equilibrium point introduced in (20) is asymptotically stable.
Proof For clarity, the proof of Theorem 1 is split into two steps. Specifically, it is first shown that the adaptive SMC drives the system states to the sliding surface. After that, the asymptotic stability of the desired equilibrium point is demonstrated.
Step 1. To further analyze the convergence rate of the sliding manifold, a Lyapunov function candidate V_1(t) is chosen as follows. The time derivative of V_1(t) can then be expressed accordingly, where γ = min(K_1 - ε̄_1 + k_1 - ε_1, K_2 - ε̄_2 + k_2 - ε_2). The control gains K_1, K_2 in (27) are selected to satisfy the conditions (41). Substituting the conditions (41) into (40), the following conclusion holds, where t_f ≤ (√2/γ)√(V_1(0)). Based on the above results, it can be concluded that the system states reach the sliding surface in the finite time t_f.
Step 2. Furthermore, the following auxiliary vectors are constructed to proceed with the proof of Theorem 1. Taking the time derivatives of ζ_a1 and ζ_a2 gives (44), where (18) and (42) are utilized and A_i, ξ_i, ς_i, i = 1, 2 are expressed accordingly. It can be observed that A_1 and A_2 in (44) define quasi-linear systems. The stability of the quasi-linear system with A_1 is analyzed first. To better deal with the linear part of (44), a state transformation is made, and a nonnegative Lyapunov function candidate V_2(t) is chosen in (47). Taking the time derivative of (47) produces (48). The upper bounds of the first two terms of (48) are unknown. To complete the proof, λ_{1i}, λ_{2i}, λ_{3i}, i = 1, 2 are first selected to satisfy (50); by the Routh criterion, A_1 and A_2 are then Hurwitz. To facilitate the analysis, all the roots of A_1 and A_2 are placed at -k_1 and -k_2, respectively, as long as (51) holds. To this end, H_1 is a Jordan matrix. Moreover, to render H_1 positive definite, the minimum eigenvalue λ_{m1} of H_1 is configured to satisfy (53). Subsequently, the upper bounds of ξ_1 and ξ_2 are calculated as in (54) (see the appendix for the explicit calculation). Using Property 1, the upper bound of (48) is calculated, where (54) and (46) are utilized, and α_1, α_2 are expressed as α_1 = 2ρ_1 ||P_1^{-1}|| · ||P_1||^2, α_2 = 2 ||P_1^{-1}|| · ||ς_1||. (56) Noting that Q_1 is an upward-opening quadratic function, there exists an interval Ω of μ_1 (> 0) guaranteeing V̇_2(t) ≤ μ_1 Q_1 < 0, which can be calculated as (57). If the initial value of μ_1 satisfies μ_1(0) ∈ Ω, one obtains V̇_2(t) ≤ 0 and the conclusion (58) can be drawn. Furthermore, combining (13) with (21), one obtains (59). Gathering the results in (58) and (59) gives rise to the final bound, where (42) and (46) are utilized. For the system ζ̇_a2 = A_2 ζ_a2 + ξ_2 + ς_2, the proof is similar to the above analysis and is not repeated here for brevity. This completes the proof of Theorem 1.

Remark 6 In fact, as a quasi-linear system, the performance of (44) mainly depends on its linear part, i.e., the matrices A_1, A_2. Hence, if the parameters λ_{1i}, λ_{2i}, λ_{3i}, i = 1, 2 of A_i are configured as functions of k_i according to (51), the condition in (50) is satisfied and A_1, A_2 are Hurwitz. Moreover, by further tuning k_1, k_2 within the range (53) by trial and error, the condition in (57) can be satisfied and the asymptotic stability of the DOCS at the equilibrium point is guaranteed.
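The pole-placement step in the proof can be checked numerically. Under the illustrative choice below, the sliding-surface parameters are taken so that the characteristic polynomial of the linear block is (s + k)^3, and NumPy confirms that the corresponding companion matrix has all eigenvalues at -k, i.e., it is Hurwitz; the actual λ-to-k mapping of (51) is in the original equations, not reproduced here.

```python
# Numerical check of the Hurwitz/pole-placement argument in Step 2: choosing
# the parameters so the characteristic polynomial is (s + k)^3 places all
# roots at -k. The value of k below is an arbitrary illustrative choice.
import numpy as np

k = 2.0                                   # desired pole magnitude (example)
c2, c1, c0 = 3 * k, 3 * k**2, k**3        # (s+k)^3 = s^3 + c2 s^2 + c1 s + c0

# Companion (controllable canonical) form of s^3 + c2 s^2 + c1 s + c0
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-c0, -c1, -c2]])

eigs = np.linalg.eigvals(A)
print(eigs)                               # all approximately -2
assert np.all(eigs.real < 0)              # Hurwitz: every eigenvalue in the LHP
```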
Hardware experimental results
The hardware experimental testbed has been described in detail in [26] and is not introduced again here for brevity. The physical parameters of the testbed are as follows, and the reference trajectories are chosen as x_{1d}(t) = 1.5(1 - e^{-0.0065t^3}) and x_{2d}(t) = 1.5(1 - e^{-0.0065t^3}) + 0.9.
The initial values of the system states are set accordingly. It is worth pointing out that the control gains are not changed during the entire experimental process.
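The S-shaped reference trajectories can be evaluated directly. The sketch below reproduces x_{1d}(t) = 1.5(1 - e^{-0.0065t^3}) and the offset trajectory x_{2d}(t), and its analytic derivative shows the trajectory starts from rest, which helps avoid exciting payload swing at start-up; the printed sample times are arbitrary.

```python
# Evaluate the smooth reference trajectories x1d(t) = 1.5(1 - e^(-0.0065 t^3))
# and x2d(t) = x1d(t) + 0.9 from the experiments, and verify the smooth start:
# the analytic velocity 1.5 * 3 * 0.0065 * t^2 * e^(-0.0065 t^3) is 0 at t = 0.
import math

def x1d(t: float) -> float:
    return 1.5 * (1.0 - math.exp(-0.0065 * t**3))

def x2d(t: float) -> float:
    return x1d(t) + 0.9          # second trolley keeps a constant 0.9 m offset

def v1d(t: float) -> float:
    return 1.5 * 3 * 0.0065 * t**2 * math.exp(-0.0065 * t**3)

for t in (0.0, 5.0, 10.0, 20.0):
    print(f"t={t:5.1f}s  x1d={x1d(t):.3f}m  x2d={x2d(t):.3f}m  v1d={v1d(t):.3f}m/s")
```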
Experiment 1
In this group of experiments, the controller needs to overcome matched disturbances such as the dead zones of actuators and frictions. Specifically, the frictions and the dead zones of the actuators are both piecewise and discontinuous functions, which cause positioning errors of the two trolleys. Furthermore, the friction forces change with the positions and speeds, and the friction forces of the two trolleys are also different. As shown in Figs. 3 and 4, the proposed method can drive the two trolleys to the desired positions at about 10 s. Moreover, the swing angles of the payload also converge quickly at the same time.
Meanwhile, the 2-norms of the output weights W_1 and W_2 evolve differently, which illustrates that the frictions of the two trolleys are different.
Experiment 2
In this experiment, the parameters of the DOCS are changed to m = 4.308 kg and l = 0.6 m to verify the robustness of the proposed method against parameter uncertainties. To better assess the robustness and anti-swing performance of the proposed control method, the classic coordination controller for DOCSs in [26] and the neural network-based adaptive anti-swing controller (NNAAC) in [19] are used for comparison. The control gains of the NNAAC are elaborately tuned, and the control gains for the coordination controller are set as λ = 1, k = 5, k_p1 = 1000, k_d = 300, k_a1 = 400, 0.05, k_p2 = 800, ... From Fig. 5, it can be seen that even when the payload mass and the length of the hoisting ropes are all changed, the proposed controller can still compensate for the uncertain parameters and unmodeled dynamics simultaneously. In contrast, as shown in Figs. 6 and 7, the coordination controller and the NNAAC fail to achieve parameter adaptation and friction compensation simultaneously; as a result, position errors and obvious residual payload swing are exhibited. Specifically, the position errors of the two trolleys for the proposed method are 0.002 m and 0.004 m, while the NNAAC has positioning errors of 0.012 m and 0.01 m, and the trolleys under the coordination controller are still 0.029 m and 0.033 m away from the desired positions when they stop. Moreover, it can also be seen from Fig. 8 that the adopted RBFNN of the proposed method compensates for most of the matched disturbances, whereas the network of the NNAAC has poor estimation ability for such large and varying disturbances.
Experiment 3
In practical applications, when payloads are replaced, they cannot be stabilized immediately. Hence, residual payload swing exists, which takes a long time to eliminate and thus badly reduces transportation efficiency. To test the performance and robustness of the controller under this working condition, initial swings are inflicted on the payload. As can be seen from Fig. 9, even when apparent initial swings exist, the proposed method still accomplishes accurate positioning and satisfactory anti-swing performance, which verifies its robustness. (Fig. 9 plots the 2-norms of the neural network weights W_1, W_2, V_1, V_2 and the control inputs F_1, F_2; blue solid lines: the proposed method, green chain lines: the coordination controller in [26], red dash lines: the neural network-based adaptive anti-swing controller in [19].) However, as shown in Figs. 10 and 11, although the two comparison methods can drive the trolleys to the desired positions, obvious residual swing remains afterward, which is still not eliminated after 20 s.
Experiment 4
To fully validate the robustness of the presented controller, external unmatched disturbances are exerted on the payload abruptly. As shown in Fig. 12, the maximum payload swing angle reaches about 7 deg at about 26 s, and the system is re-stabilized quickly (in about 2.5 s) after being disturbed. For comparison, the NNAAC is also tested with an external disturbance. However, since the coordination controller is similar to a PD (proportional-derivative) controller, it is not robust in counteracting sudden disturbances (see Fig. 13). On the other hand, because no swing-related information is incorporated into the sliding surface of the NNAAC, the trolleys fail to react to the swing motion and eliminate the swing (see Fig. 14).
Conclusions
Utilizing the symmetry property, the dynamic equations of the DOCS are transformed into cascade forms, and a new sliding manifold is constructed afterward. Based on this sliding surface and a one-hidden-layer neural network, an adaptive SMC is proposed. Furthermore, the asymptotic stability of the DOCS is guaranteed even when it suffers from both matched and unmatched disturbances. Simultaneously, the adopted neural network addresses the parameter uncertainties, frictions, input dead zones, and other unmodeled dynamics. Finally, a series of experimental results verifies the effectiveness and strong robustness of the proposed control scheme. In future work, the hoisting and lowering of the payload and the actuator saturation problem will be considered.
"Engineering",
"Computer Science"
] |
Coupling Heat Conduction and Radiation by an Isogeometric Boundary Element Method in 2-D Structures
We propose an efficient isogeometric boundary element method to address the coupling of heat conduction and radiation in homogeneous or inhomogeneous materials. The isogeometric boundary element method is used to construct irregular 2D models, which eliminates errors in model construction. The physical unknowns in the governing equations for heat conduction and radiation are discretized using an interpolation approximation, and the integral equations are finally solved by Newton-Raphson iteration. It is noteworthy that we use the radial integration method to convert the domain integrals to boundary integrals, and we combine the numerical schemes for heat conduction and radiation. The results of three numerical cases show that the adopted algorithm improves computational accuracy and efficiency.
Introduction
Since Hughes et al. [1] proposed isogeometric analysis, the isogeometric boundary element method (IGABEM) has developed rapidly; its principle is to use B-spline interpolation functions instead of the traditional Lagrangian interpolation basis functions. Traditional finite element and boundary element techniques use Lagrangian basis functions; it is therefore necessary to first convert the geometric model created by computer-aided design (CAD) into a mesh and then compute on that mesh, an operation that introduces mesh-reconstruction errors and wastes time. The isogeometric boundary element method inherits the traditional BEM's advantages of dimensionality reduction and of handling infinite domains. Simultaneously, the CAD model can be directly imported into CAE for calculation, simplifying the mesh-reconstruction process and reducing the geometric-model errors caused by tedious preprocessing. In addition, the IGABEM can provide high-order continuity and flexible refinement schemes, and it can be extended to many 2D modeling fields. Therefore, the IGABEM has been widely studied and successfully applied in many fields such as elasticity [2,3], fluid-structure coupling [4-6], shape optimization [7,8], and acoustics [9-16]. Heat conduction and radiation require coupling of the two physical fields; results can deviate significantly if only conduction or only radiation is considered. Many scholars at home and abroad have studied coupled heat conduction and radiation. Furmanski and Banaszek [17] proposed a method based on the combination of finite element space discretization and iterative techniques and used it to study coupled heat conduction and radiation in a rectangular region.
Kong and Viskanta [18] studied the coupled heat transfer and radiation in two-dimensional rectangular glass media using the discrete ordinates method. Lacroi et al. [19] studied the radiation-coupled heat transfer of a rectangular translucent medium irradiated in a specific direction. Zhou et al. [20] combined the finite volume method with a spectral band model to solve heat transfer in absorbing and scattering nongray media. Luo et al. [21] solved two-dimensional coupled heat transfer and radiation in an isotropic rectangular medium by the ray-trace node analysis method. Mondal and Mishra [22] combined the Boltzmann method with the finite volume method to handle coupled heat transfer and radiation with heat flux and temperature boundary conditions. Gu et al. [23-30] studied the heat transfer of coating structures based on the isogeometric boundary element method, and they also used the isogeometric boundary element method to calculate the effective property of steady-state thermal conduction in 2D heterogeneities with a homogeneous interphase. Fu et al. [31-33] studied the boundary collocation method for anomalous heat conduction analysis in functionally graded materials. Nie et al. [34-38] studied the noniterative inversion of heat flux boundary conditions and thermal stress estimation of gradient materials based on the refined integration finite element method. Chen et al. [39-43] studied heat conduction analysis with two-dimensional and three-dimensional isogeometric boundary element methods. The study in this paper is the first to apply the isogeometric boundary element method to coupled heat conduction and thermal radiation. Compared with the traditional boundary element method, this method not only has unparalleled advantages in model construction but also comes close to commercial software solutions in computational accuracy. The remainder of this paper is organized as follows: the theory of isogeometric analysis is given in Section 2; the coupled heat conduction and radiation equations are presented in Section 3; Section 4 gives the discretization of the integral equations, and Section 5 shows how the discrete equations form the system equations; Section 6 gives numerical examples, and Section 7 provides a summary.
Isogeometric Analysis
In this section, we focus our attention on model construction using the isogeometric boundary element method, a variant of the BEM based on NURBS basis functions. Owing to its precise representation of geometry, the isogeometric boundary element method can be applied to the field of heat conduction and radiation. Therefore, a brief introduction to B-splines and NURBS is given in this part. For the interested reader, a complete description can be found in the work of Piegl and Tiller [44].
A B-spline is defined using a group of piecewise polynomials, which are determined by a "knot vector" of nondecreasing values Ξ = {ξ_1, ξ_2, ..., ξ_{n+p+1}}, where a is the knot index, p is the curve degree, and n is the number of basis functions or control points P_a. Each ξ_a ∈ Ξ is called a knot. The B-spline basis functions of degree p are defined recursively. Starting with p = 0, we define

N_{a,0}(ξ) = 1 if ξ_a ≤ ξ < ξ_{a+1}, and 0 otherwise, (1)

and for p ≥ 1

N_{a,p}(ξ) = ((ξ - ξ_a)/(ξ_{a+p} - ξ_a)) N_{a,p-1}(ξ) + ((ξ_{a+p+1} - ξ)/(ξ_{a+p+1} - ξ_{a+1})) N_{a+1,p-1}(ξ), (2)

with terms of the form 0/0 taken as zero. Thus, as given in [44], a whole B-spline curve can be defined by the n basis functions in Equations (1) and (2) and the control points P_a. However, in BEM implementations a discretized form of the boundary integral equation is commonly used, in which computations are focused on individual elements.
A single knot span corresponds to a single element in conventional BEM implementations. Therefore, in this work, the descriptions of the boundary geometry and physical quantities are given on a knot span with p + 1 nonzero basis functions, whose values can be obtained from Equations (1) and (2). A pth-degree B-spline curve on a knot span (with p + 1 nonzero basis functions) is constructed by mapping from the parameter space to physical space, as shown in the following equation:

x(ξ) = Σ_{a=1}^{n} N_{a,p}(ξ) P_a, (3)

where P_a denotes the set of control point coordinates and x = (x, y, z) is the location on the physical curve corresponding to the coordinate ξ in parametric space. NURBS, the dominant tool used to describe curves and surfaces in CAD systems, are developed from B-splines and offer significant advantages due to their ability to describe circular arcs and other conic sections exactly. A NURBS geometry is a weighted form of the B-spline definition, i.e.,

x(ξ) = Σ_{a=1}^{n} R_{a,p}(ξ) P_a, (4)

where R_{a,p} are the NURBS basis functions, defined by

R_{a,p}(ξ) = N_{a,p}(ξ) w_a / Σ_{b=1}^{n} N_{b,p}(ξ) w_b, (5)

where a denotes the control point index, N_{a,p} is the B-spline basis function from Equation (2), and w_a is the weight associated with control point P_a. When all the weights are of equal value, the NURBS curve degenerates to a B-spline curve. At this point, it is useful to emphasize that the control points do not all lie on the spline curve.
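Equations (1)-(5) translate directly into code. The sketch below implements the Cox-de Boor recursion of Equation (2) and the rational basis of Equation (5), then evaluates a quadratic NURBS curve at one parameter value; the knot vector, control points, and weights are illustrative data, not the models of Section 6.

```python
# Cox-de Boor recursion for B-spline basis functions N_{a,p} (Eqs. (1)-(2))
# and NURBS curve evaluation via the rational basis R_{a,p} (Eqs. (4)-(5)).
# The knot vector, control points, and weights below are illustrative only.

def bspline_basis(a, p, xi, knots):
    """Evaluate N_{a,p}(xi) recursively; 0/0 terms are taken as 0."""
    if p == 0:
        return 1.0 if knots[a] <= xi < knots[a + 1] else 0.0
    left_den = knots[a + p] - knots[a]
    right_den = knots[a + p + 1] - knots[a + 1]
    left = (xi - knots[a]) / left_den * bspline_basis(a, p - 1, xi, knots) if left_den else 0.0
    right = (knots[a + p + 1] - xi) / right_den * bspline_basis(a + 1, p - 1, xi, knots) if right_den else 0.0
    return left + right

def nurbs_point(xi, p, knots, ctrl, w):
    """x(xi) = sum_a R_{a,p}(xi) P_a with R_{a,p} = N_{a,p} w_a / sum_b N_{b,p} w_b."""
    n = len(ctrl)
    N = [bspline_basis(a, p, xi, knots) for a in range(n)]
    denom = sum(N[a] * w[a] for a in range(n))
    return tuple(sum(N[a] * w[a] * ctrl[a][d] for a in range(n)) / denom
                 for d in range(2))

# Quadratic (p = 2) example: open knot vector, 3 control points, unit weights
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
ctrl = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
w = [1.0, 1.0, 1.0]
print(nurbs_point(0.5, 2, knots, ctrl, w))  # midpoint of the arc: (0.5, 0.5)
```

With unit weights the call reproduces the plain B-spline curve of Equation (3), illustrating the degeneration property noted above.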
Basic Governing Equations of Heat Conduction and Radiation
The solution to the heat conduction and radiation problem needs to consider two basic equations simultaneously. Due to the complexity of radiative heat transfer, the main difficulty of the problem is transferred to the thermal radiation source term. We first consider the steady-state linear heat conduction and radiation problem. This paper expounds on using the isogeometric boundary element method to solve this problem.
Heat Conduction Integral Equation.
For steady-state linear heat conduction and radiation problems, the thermal conductivity k is assumed to be constant for a homogeneous material in the calculation region Ω. The heat conduction governing equation with a radiative heat source term can be expressed as follows, where T(x, t) represents the temperature at point x (since the problem is steady-state, no time term actually appears), k is the thermal conductivity, and q_V^r(x) is the heat source value at point x. Green's function is introduced as the weight function G, and Equation (7) is obtained by taking the weighted integral over the calculation region. Then, we use integration by parts and the Gauss divergence theorem to treat Equation (8).
The integral equation can be obtained by the integration-by-parts operation as follows (the specific derivation can be found in [39]), where the coefficient c(x) = 1 when the source point x is located inside the domain and c(x) = 0.5 when the source point lies on a smooth boundary of the structure; the integrals in the formula are expressed as follows. Among them, q_c = -k(∂T/∂n) is the heat flux caused by heat conduction; for two-dimensional problems G = (1/2π)ln(1/r), and for three-dimensional problems G = 1/(4πr). To solve integral Equation (9), we must know q_V^r, so we introduce the radiation integral equation.
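For reference, the two-dimensional fundamental solution G = (1/2π)ln(1/r) and its flux kernel can be evaluated as below. The normal-derivative form ∂G/∂n = -(r_vec·n)/(2πr²) follows from differentiating G along the unit normal n at the field point; the sample points in the snippet are illustrative.

```python
# 2-D fundamental solution of the Laplace operator used in Eq. (9):
# G(x, y) = (1/2π) ln(1/r) with r = |y - x|, and its flux kernel
# dG/dn = -(r_vec · n) / (2π r^2) at field point y with unit normal n.
# The sample points below are illustrative only.
import math

def G(x, y):
    r = math.dist(x, y)
    return math.log(1.0 / r) / (2.0 * math.pi)

def dG_dn(x, y, n):
    rx, ry = y[0] - x[0], y[1] - x[1]
    r2 = rx * rx + ry * ry
    return -(rx * n[0] + ry * n[1]) / (2.0 * math.pi * r2)

src, fld, normal = (0.0, 0.0), (1.0, 1.0), (0.0, 1.0)
print(G(src, fld), dG_dn(src, fld, normal))
```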
Heat Radiation Integral Equation.
To solve the boundary equation of the heat conduction problem, the radiative heat source q_V^r should be obtained first, so the radiative heat transfer of the participating medium must be considered. We give the integral equation of radiative heat transfer on the boundary, where the boundary medium is able to absorb and emit radiation, together with the integral equation of the radiative heat source. Equation (11) applies when the source point is located either on the boundary or inside the domain, while Equation (10) is only applicable on the boundary. The definition of each parameter can be found in [45].
Discretization of the Integral Equation
Equations (9) and (11) are the basic integral equations for solving steady-state coupled heat conduction and radiation problems with the isogeometric boundary element method. Peng et al. [45] converted the domain integrals into boundary integrals by a radial basis function method, which plays a role similar to the internal grid division of traditional methods. In this work, the radial basis function method is used to treat all the domain integrals, and the Gaussian quadrature formula is used to calculate the converted integrals. Equation (13) is the calculation formula of the radial integration method, with α = 1 for two-dimensional problems and α = 2 for three-dimensional problems.
Discretization of the Heat Conduction Equation.
Because the heat source term q_V^r(x) is contained in the domain integral of Equation (8), we use an interpolation function to express it as follows, where q_V^{rH} is the value of the radiation source term at point H and M_I(q) is the global interpolation function given below; repeated indices imply summation, and the indices I and j range over 1 to M_S. Bringing Equation (13) into Equation (9) and then using the radial integration method of Equation (12), we can get
(15)
By bringing Equation (11) into Equation (9), the domain integral can be transformed into a boundary integral, so only the boundary elements and internal points in the calculation domain need to be discretized. For the linear elements of a two-dimensional problem, the number of boundary elements equals the number of boundary nodes. Assume that the boundary is discretized into N_e boundary elements and that N_i points are arranged inside the domain, so the total number of nodes is N_e + N_i. Taking all nodes as source points in turn and using the radial integration method, we obtain the following equation, where M is an N_A × N_A matrix. Similar to the calculation of the boundary integral, the following equations can be obtained by Gaussian numerical integration and assembly of all boundary elements into the unified integral equation, where U is the normalized temperature kT; H is an N_A × N_A matrix in which, for the entries corresponding to the internal point temperatures, only the diagonal has the value 1 and the other elements are 0; and G is an N_A × N_B matrix.
Discretization of the Heat Radiation Equation.
The fundamental equations of thermal radiation were given in Equations (11) and (12), and the domain integrals are again converted to boundary integrals by the radial integration method. The blackbody radiation force E_b in the domain integral is unknown, so we use an interpolation basis function to represent it as follows, where N_I(q) is the global interpolation function and E_b^I is the blackbody radiation force at node I. Putting Equation (18) into Equation (12) and using the radial integration method, we obtain Equation (20). The domain integral in Equation (11) can be treated in the same way; the only difference is that K_0(q, p) replaces K(q, p). The absorption ratio a can be handled directly by the radial integration method if it is a constant or a given known function; if a is only given as values at some nodes, we can solve Equation (20) by representing a with a global interpolation function. Since all parts within the integral are then known or constant, the integral can be evaluated analytically using Gaussian quadrature.
With the domain integrals removed, the full boundary integral equation is obtained by substituting the transformed integrals into the unified integral equation. By simply discretizing the boundary into linear or quadratic boundary elements and applying the basic Gaussian quadrature formula to each boundary integral, the equation can be evaluated on each boundary cell. Since radial basis functions are used to approximate the unknown quantity in the conversion, we usually arrange some internal points to obtain accurate results.
We discretize the boundary into N_e boundary elements and arrange N_i internal points in the domain, so the total number of points is N_e + N_i. Taking each node and internal point as the source point in turn, the boundary integrals in the unified equation can be calculated to obtain the matrix equations (21) and (22), in which the matrices Z and Z' are of order N_A × N_A. The algebraic equations are assembled by taking each boundary node i as the source point P in turn and calculating the boundary integral on each boundary element, where E_b and q^r are N_A-dimensional column vectors composed of the boundary blackbody radiation force and the boundary radiative heat flux, and E_b(Q_m) is the N_A-dimensional column vector composed of the medium blackbody radiation force. For the element expressions of the N_A × N_A coefficient matrices, we refer to the work of Peng et al. [45]; then, using the same method as in Equation (12), the following system of algebraic equations can be obtained. Equation (23) is the boundary node equation, and Equation (24) applies to all nodes. In this way, the heat radiation equation is discretized.
System of Equations for Heat Conduction and Radiation Coupling Problem and Its Solution
When there are no internal points and all conditions are temperature boundary conditions, the column vector E_b is known, so the unknown boundary quantity q^r can be obtained directly and then used to compute the radiative heat source q_V^r. Finally, the radiative heat source, the radiative heat flux, and the heat conduction boundary conditions are brought together into the discrete system of equations, and all boundary unknowns are obtained by solving the linear system. The unknown boundary quantities in Equation (24) are obtained by solving the linear equations; inverting, we solve for the unknown q^r as follows, and the matrix equation is then obtained by merging. The total power radiated per unit area of a blackbody surface (called the emissive power or energy flux density of the body) is directly proportional to the fourth power of the blackbody's thermodynamic (absolute) temperature T, i.e., E_b = σT^4 with σ the Stefan-Boltzmann constant.
where x in Equation (26) is the column vector of the unknown temperatures and heat fluxes of the boundary nodes and internal points; y is the column vector obtained by multiplying all the known boundary quantities by the corresponding coefficients in Equation (26); and the column elements of matrix B corresponding to boundary nodes with prescribed temperature are 0. We then consider three kinds of boundary conditions, namely temperature boundary conditions, heat flux boundary conditions, and mixed boundary conditions, as shown in the respective equations. For the matrices A and B and the vector y in Equation (26), the assembly differs because the values of their elements depend on the positions of the source point p and the field point Q. The elements of matrices A and B and vector y are denoted A_ij, B_ij, and y_i. When i and j both lie on the boundary, we obtain Equations (28)-(30): Equation (28) applies when point j is under the first type of boundary condition, Equation (29) when point j is under the second type, and Equation (30) when point j is under the third type. When points i and j are a boundary point and an internal point, respectively, or both are internal points, the corresponding expressions can be found in [45]. Since Equation (21) is a nonlinear system in the temperature, it is solved by the Newton-Raphson iterative method, as described in [39,45,46].
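As a minimal illustration of the Newton-Raphson step, the sketch below solves a scalar surface energy balance containing the same T^4 nonlinearity that makes system (21) nonlinear. The residual εσT⁴ + h(T - T∞) - q = 0 and all coefficient values are hypothetical stand-ins for the assembled matrix system, chosen only to show the quartic-plus-linear structure.

```python
# Newton-Raphson iteration on a scalar stand-in for the nonlinear system (21):
# f(T) = eps*sigma*T^4 + h*(T - T_inf) - q = 0 couples a T^4 radiation term
# with a linear conduction/convection term. All coefficients are illustrative.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
eps, h, T_inf, q = 1.0, 10.0, 300.0, 5000.0

def f(T):
    return eps * SIGMA * T**4 + h * (T - T_inf) - q

def df(T):
    return 4.0 * eps * SIGMA * T**3 + h

T = 500.0                              # initial guess, K
for it in range(50):
    step = f(T) / df(T)                # Newton update: T <- T - f(T)/f'(T)
    T -= step
    if abs(step) < 1e-10:
        break
print(f"T = {T:.4f} K after {it + 1} iterations, residual = {f(T):.2e}")
```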
Rectangular Example.
A rectangular domain of size 2L × L is bounded by blackbody walls and filled with a translucent medium with absorption ratio a = 1 and thermal conductivity k = 226.76 W/(m·K). The bottom wall of the rectangle is maintained at T_0 = 1000 K, the other walls at T_1 = 500 K, and the emissivity of the four walls is ε = 1. To analyze the coupled heat conduction and radiation in this two-dimensional rectangular model numerically, we discretize the boundary into 60 equally spaced linear cells (20 on each long side and 10 on each short side) and uniformly arrange seven interior points along the symmetry line x = L. The temperature distribution at the interior points is calculated under coupled conductive-radiative heat transfer, and the IGABEM results are compared with those of the COMSOL software (Figure 1), which verifies the accuracy of the algorithm. Figure 1(a) shows the 2-D rectangular solution domain and its boundary conditions, and Figure 1(b) shows the boundary cells and the interior points (red points and black points). First, we compare the COMSOL and IGABEM solutions at the black interior points; Figure 2(a) shows that the results of the two algorithms are similar. Table 1 compares the calculation results of COMSOL and the IGABEM. Since the example has no analytical solution, COMSOL, as a widely used commercial solver, can be regarded as an approximate reference solution. The maximum error is 6.5E-3 and the minimum error is 1.1E-4, which verifies the accuracy and stability of the IGABEM. Owing to the smaller number of degrees of freedom, the computational efficiency of the IGABEM for 2-D problems is better than that of the software in terms of both time and accuracy. As shown in Table 2, because the rectangle is symmetric, the temperatures of the inner points distributed along the symmetry axis are essentially the same.
We arrange red points and black points in the rectangular area as shown in Figure 1(b); Figures 2(a) and 2(b) show the temperature distribution curves of the red and black points, respectively. The error reaches 2% only at x = 1.0L, y = 0.5L and is lower than 1% at the other points. The main reason is that the number of discrete boundary segments is small; in the next example, we improve the accuracy by increasing the number of boundary divisions. Figures 3(a) and 3(b) show the temperature distribution diagram and the isotherms from COMSOL. The figures show that the temperature is higher closer to the bottom boundary and lower closer to the upper boundary. The distribution is not uniform, and the isotherms are concentrated near both sides of the bottom, indicating larger temperature changes there; this is in line with the laws of thermodynamics and further verifies the stability and accuracy of the algorithm.
NURBS Curve Model for Homogeneous and Heterogeneous Material.
The isogeometric boundary element method can eliminate model errors, so we exploit this advantage for model construction in the field of thermal radiation. A blackbody wall encloses a domain composed of a 1 × 1 square and a semicircular region bounded by a quadratic NURBS curve. The interior is filled with a translucent medium with absorption ratio a = 1.5 and thermal conductivity k = 400.76 W/(m·K); the bottom temperature is maintained at T_0 = 800 K, the other wall temperatures are T_1 = 200 K, and the emissivity of the walls is ε = 1, as shown in Figure 4(a). To numerically analyze this conductive-radiative coupled heat transfer problem, we discretize the boundary into linear elements and arrange 14 points uniformly along the x = 0.5 symmetry line, as in Figure 4(b). The temperature distribution at each internal point is calculated under coupled conduction and radiation, and the accuracy of the algorithm is verified by comparing the IGABEM results with the COMSOL results.
Comparing Figures 5(a) and 5(b), NURBS curves with different control points but the same weight coefficients can be seen; because the control points differ, the shapes vary greatly. Comparing Figures 5(b) and 5(c), the two Bézier curves with the same control points but different weight coefficients follow approximately the same trend. Among the three curves, the second and third have roughly the same direction; because they share control points, their local difference is relatively small. In practical engineering, standard model components rarely suffice, and local modification is more conducive to building an accurate two-dimensional structural model. On this basis, we build the two-dimensional model with a square bottom shown in Figure 4(a), with internal points as in Figure 4(b). Figure 6(a) shows the initial NURBS curve and the positions of the control points. After normalizing the knot vector, the parameter-space intervals of the boundary elements are obtained; initially only one boundary element is formed, and its parameter-space interval is given. By splitting each NURBS element equally and inserting new knots, the refined control point sequence, knot vector, and weight vector are obtained. For example, inserting a new knot at the midpoint of each initial NURBS cell's parameter space gives the refined control point sequence shown in Figure 6(b). After the Bézier extraction operation, a new set of control point sequences is obtained, as shown by the BE-operation knots. Comparing the "initial knots" in Figure 6(a) and the "inserted knots" in Figure 6(b), we find that inserting a new knot also changes the positions of some original control points while still describing the same curve; the Bézier extraction operation, in contrast, does not change the positions of the original control points but inserts a new control point at the midpoint of some elements. The new control point sequences and Bézier extraction sequences obtained when the initial NURBS cells are divided into 3, 4, 5, and 6 subunits are then given in turn, as shown in Figures 6(c)-6(f). In this model, the control points are (0, 1), (0.2, 1.7), (0.8, 1.7), and (1, 1), and the weight coefficients are (1, 1, 1, 1).
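The knot-insertion refinement described above follows Boehm's single-insertion rule: inserting ū into the span [ξ_k, ξ_{k+1}) of a degree-p curve replaces p old control points with p + 1 new ones formed by convex combinations, leaving the curve geometry unchanged. The sketch below applies one insertion at ū = 0.5 (an illustrative value) to the four control points quoted above, treated as a cubic Bézier segment for illustration.

```python
# Single knot insertion (Boehm's algorithm): inserting u_bar into the span
# [U[k], U[k+1]) of a degree-p B-spline yields new control points
#   Q_i = alpha_i * P_i + (1 - alpha_i) * P_{i-1},
#   alpha_i = (u_bar - U[i]) / (U[i+p] - U[i])   for k-p+1 <= i <= k,
# without changing the curve. Data below are illustrative.

def insert_knot(U, P, p, u_bar):
    k = max(i for i in range(len(U) - 1) if U[i] <= u_bar)  # span index
    Q = []
    for i in range(len(P) + 1):
        if i <= k - p:
            Q.append(P[i])                        # unaffected leading points
        elif i <= k:
            a = (u_bar - U[i]) / (U[i + p] - U[i])
            Q.append(tuple(a * P[i][d] + (1 - a) * P[i - 1][d] for d in range(2)))
        else:
            Q.append(P[i - 1])                    # unaffected trailing points
    return sorted(U + [u_bar]), Q

U = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]          # open cubic knot vector
P = [(0.0, 1.0), (0.2, 1.7), (0.8, 1.7), (1.0, 1.0)]  # control points from the model
new_U, Q = insert_knot(U, P, 3, 0.5)
print(len(Q), Q)                                      # 5 control points, same curve
```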
Table 3 gives the temperature errors of the IGABEM against COMSOL at the black points. As the value of y increases, the temperature gradually decreases, and the accuracy improves as the number of boundary divisions increases. For the red points, the error analysis at y = 0.7 is given in Table 4; the error is less than 1%, and the convergence is better than in the previous example with few boundary divisions. Figure 7 shows the error graphs of the black and red points, respectively. Figure 7(a) shows that as y increases, the temperature changes more and more slowly; that is, the temperature decreases with y while the magnitude of its gradient shrinks. The reason is that the smaller the value of y, the higher the temperature and the faster the diffusion; close to the top, the temperature diffuses less and more slowly, consistent with the laws of thermodynamics. Figure 8 shows the COMSOL temperature distribution diagram and isotherm figure. From Figures 8(a) and 8(b), we can see that the temperature distribution conforms to the laws of thermodynamics. The closer the isotherms are to the two bottom corners, the denser they are, indicating larger temperature changes; in structural design, we can focus on this part, which can reduce the probability of thermal stress in the structure, achieve structural design optimization, and increase the safety and reliability of the structure.
To study the heat transfer properties of heterogeneous materials, we change the thermal conductivity of the previous example into a linear function of temperature, k(y) = 400 - 0.2T(y), and examine the temperature error at the distributed inner points. The boundary conditions and internal points are shown in Figure 9. For the homogeneous material, the thermal conductivity is constant, so the error is relatively small; for the heterogeneous material, whose thermal conductivity is a function of temperature, the error is 3.2%, which is relatively large but acceptable and verifies the correctness and stability of the algorithm. Figure 10 and Table 5 give the temperature distribution in the domain composed of the circular and square regions constructed by the two-dimensional Bézier curves. The error between the IGABEM and COMSOL results is no more than 3%, indicating that the IGABEM has reliable accuracy in solving nonlinear coupled heat transfer problems. Table 6 compares the efficiency of the two calculation methods; the computational efficiency of the IGABEM is better than that of COMSOL under the same discretization conditions.
Circle Subtract Square Model.
We consider a circle of radius 1 centered at the point (0, 0) and remove a square with side length 1 from its upper part. The wall is a blackbody wall with absorption ratio a = 1 and thermal conductivity k = 226.76; the red edge in Figure 11(a) has a temperature of 800 K, the other edges are at 200 K, and the thermal radiation emissivity of the wall is ε = 1. As shown in Figure 11, Figure 11(b) is the interior-points graph of the model: the red points are the transverse distribution and the black points are the vertical distribution. We discretize the model into boundary elements, solve it, and compare the results with COMSOL results. We constructed the model with control points (0, 0), (0.2, 0.7), (0.8, 0.7), (1, 0), (0.8, 0.7), and (0.2, 0.7), whose weight coefficients are (1, 1, 1, 1). Figure 12(a) shows the temperature of the red points in Figure 11(b). Since the 2D model is axisymmetric about the y-axis, its temperature should be distributed symmetrically. From the two numerical calculation results, it can be seen that the calculation result of COMSOL is slightly larger than that of the IGABEM, but within a reasonable error range; the maximum error is 1%, which meets the requirements. Figure 12(b) shows the temperature of the black points in Figure 11(b). As the y value increases, the temperature of the interior points first increases and then decreases. However, since the upper interior points are close to the boundary condition with a temperature of 800 K, their temperature is higher than that of the lower interior points. Figure 13(a) shows the cloud diagram of the temperature distribution and Figure 13(b) shows the isotherm diagram. We can see that where the red points are symmetric about the x-axis, their temperature values are also symmetric about the x-axis; and as the black points move up the y-axis on one side, their temperature values decrease, in accordance with the laws of thermodynamics.
Conclusions
This work applies the IGABEM to solve two-dimensional steady-state heat conduction and radiation problems. The NURBS method is used to construct a smooth geometric model, which eliminates geometric errors and improves computational accuracy. The radial integration method converts the domain integrals arising from variable coefficients into boundary integrals. Numerical results show that the developed algorithm effectively solves nonhomogeneous steady-state coupled heat conduction and heat radiation problems with variable coefficients. Since the underlying theory is essentially the same, the algorithm can in the future be extended to transient analysis and to three-dimensional problems. The numerical examples provide a stepping stone toward the implementation of fully integrated CAD and CAE software.
Data Availability
Data are openly available in a public repository.
"Engineering",
"Physics",
"Materials Science"
] |
A molecular dynamics study of N-A-S-H gel with various Si/Al ratios
The understanding of sodium aluminosilicate hydrate (N-A-S-H) gel is still limited due to its complex and amorphous structure. Recently, molecular dynamics simulation has provided a unique opportunity to better understand the structure of N-A-S-H gel at the nanoscale. In this work, the N-A-S-H gel structure was obtained by simulating the polymerization of Si and Al monomers by molecular dynamics. The simulated polymerization process is in good agreement with experimental results, especially in terms of the reaction rates of Si and Al species. The atomic structural features of the N-A-S-H gel were analyzed in terms of bond length and bond angle information, simulated X-ray diffraction (XRD), and Q n distribution. A significant finding is the existence of pentacoordinate Al in all simulated N-A-S-H structures, indicating that pentacoordinate Al in geopolymer does not come only from the raw material. Besides, the results show that a smaller Si/Al ratio leads to a more crosslinked and compact structure of N-A-S-H gel.
INTRODUCTION
Sodium aluminosilicate hydrate (N-A-S-H) gel is the primary reaction product of geopolymers [1]. It is known as a three-dimensional disordered structure consisting of interconnected Si and Al tetrahedra, Na+ ions, and absorbed water [2]. Some basic knowledge of N-A-S-H gel has been obtained through commonly used materials characterization techniques, including X-ray diffraction (XRD) analysis, Fourier transform infrared (FTIR) spectroscopy, and nuclear magnetic resonance (NMR) spectroscopy [3][4][5].
However, due to the complex nature of N-A-S-H gel, these experimental techniques are not able to fully unravel the mysteries of N-A-S-H gel.
Recently, molecular dynamics (MD) simulation has offered an exciting opportunity to understand the N-A-S-H gel structure down to the nanoscale. Lolli proposed a new molecular model of N-A-S-H gel based on the sodalite framework [6]. A sodium aluminosilicate glass model was built from an initial configuration of silica glass to investigate the properties of geopolymer binders by Sadat [7]. Zhang carried out reactive molecular dynamics to build N-A-S-H gels following a geopolymerization process [8]. These studies all performed molecular dynamics in different ways to model N-A-S-H gel. Most of the simulations started from an available structure (e.g., sodalite) that is analogous to the N-A-S-H gel structure. Zhang is the only one employing reactive molecular dynamics to form N-A-S-H gel, which is closer to reality. Besides, some contradictory findings have been reported regarding the effect of the Si/Al ratio [6,7,9]. Further effort is required to gain a deeper understanding of N-A-S-H gel over a wide range of Si/Al ratios.
In this study, the formation and structure of N-A-S-H gels with Si/Al ratios from 1.0 to 4.0 are investigated by molecular dynamics simulation. Unlike most of the studies mentioned above, the N-A-S-H gels were obtained from the reaction of Si monomers and Al monomers. This methodology for constructing the N-A-S-H model was derived from the polymerization of silica sols introduced by Feuston [10]. The reactive force field (ReaxFF) was adopted in this work to carry out the molecular dynamics simulations. This is the main difference compared with Zhang's research, in which the FG potential (a reactive potential developed by Feuston and Garofalini) was used. ReaxFF can yield more accurate and reliable results than the FG potential, mainly because ReaxFF divides the system energy into ten partial energy contributions [11], while the FG potential contains only two-body and three-body interaction terms [12]. With ReaxFF, the process of polymerization was simulated in this study. Detailed structural analysis was performed on the final simulated structure.
METHODOLOGY
A simulation box containing individual Si(OH)4, Al(OH)3, and NaOH molecules was built with the software PACKMOL [13] as the initial structure for molecular dynamics simulation. The size of the simulation box was 25×25×25 Å and the density of the system was set at around 2 g/cm3. Table 1 shows the composition and density of all the initial configurations. The composition of the initial configurations covers a wide range of Si/Al ratios from 1.0 to 4.0 with a fixed Na/Al ratio of 1. Molecular dynamics simulation was executed on the large-scale atomic/molecular massively parallel simulator (LAMMPS) [14]. The whole simulation was run under the canonical ensemble (NVT) condition. The system was first relaxed at 300 K for 100 ps. Then, the temperature was raised linearly up to 2000 K over the next 100 ps and subsequently kept at 2000 K for 1 ns to accelerate the reaction. This was followed by a cooling process at a rate of 2.2 K/ps. Finally, the system was equilibrated at 300 K for 200 ps. The total duration of the whole process was 2.15 ns with a time step of 0.25 fs. ReaxFF was employed in all simulations to describe the interatomic interactions among the atoms [11]. The detailed potential functions and parameters can be found in the literature [6].
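The annealing protocol above can be summarized as a piecewise-linear thermostat schedule. The following minimal Python sketch encodes the stated stages (100 ps at 300 K, a 100 ps ramp to 2000 K, a 1 ns hold, cooling at 2.2 K/ps, and a final 200 ps equilibration, ~2.15 ns in total); it is an illustration of the schedule, not the actual LAMMPS input.

```python
def target_temperature(t_ps: float) -> float:
    """Return the thermostat set point (K) at simulation time t (ps)."""
    cool_time = (2000 - 300) / 2.2      # ~772.7 ps of cooling at 2.2 K/ps
    if t_ps < 100:                      # initial relaxation at 300 K
        return 300.0
    if t_ps < 200:                      # linear heating ramp to 2000 K
        return 300.0 + (2000.0 - 300.0) * (t_ps - 100) / 100
    if t_ps < 1200:                     # high-temperature hold (1 ns)
        return 2000.0
    if t_ps < 1200 + cool_time:         # linear cooling back to 300 K
        return 2000.0 - 2.2 * (t_ps - 1200)
    return 300.0                        # final equilibration

for t in (0, 150, 600, 1500, 2100):
    print(t, "ps ->", round(target_temperature(t), 1), "K")
```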
The N-A-S-H structure was then extracted from the final configuration to analyze its structural features. Visual Molecular Dynamics (VMD) software [15] was used to view the snapshots. Based on the coordinate information, bond lengths and bond angles were calculated. The XRD patterns were simulated in LAMMPS. The Q n distribution was calculated to reveal the topology of the N-A-S-H gel.
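A minimal sketch of such a Q n analysis is given below, assuming arrays of atomic coordinates and a simple distance cutoff to detect T-O bonds (T = Si or Al); bridging oxygens are those bonded to two network formers. The coordinates and the 2.0 Å cutoff are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def qn_distribution(t_xyz, o_xyz, cutoff=2.0):
    """Count bridging oxygens per tetrahedral site and tally Q^n species."""
    # Pairwise T-O distances: shape (n_T, n_O)
    d = np.linalg.norm(t_xyz[:, None, :] - o_xyz[None, :, :], axis=-1)
    bonded = d < cutoff                        # T-O adjacency matrix
    bridging = bonded.sum(axis=0) >= 2         # O bonded to >= 2 formers
    n_per_site = (bonded & bridging[None, :]).sum(axis=1)
    return np.bincount(n_per_site, minlength=5)  # counts of Q0..Q4

t_xyz = np.array([[0.0, 0.0, 0.0], [3.1, 0.0, 0.0]])    # two T sites
o_xyz = np.array([[1.55, 0.0, 0.0], [-1.6, 0.0, 0.0]])  # shared + terminal O
print(qn_distribution(t_xyz, o_xyz))  # -> [0 2 0 0 0]: both sites are Q1
```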
Polymerization process
Fig. 1 shows the simulation process in terms of the evolution of Q n, including Si sites and Al sites. The superscript n refers to the number of bridging oxygens to which a Si or Al atom is bonded. Fig. 1 only shows the simulation with a Si/Al ratio of 1.0, as the other Si/Al ratios have an almost identical trend. As can be seen in Fig. 1, all Si and Al in the system existed as Q 0 at the beginning, because all of them were separate monomers. For Si sites, Q 0 decreased immediately with simulation time. Meanwhile, Q 1 emerged first, followed by Q 2, Q 3, and Q 4 in sequence. That means Q 0 reacted and transformed into more highly polymerized sites Q n (n = 1-4). This simulated process is consistent with the geopolymerization reaction in practice [16], where monomers react to form oligomers and oligomers then polymerize to form large clusters.
The evolution of Al sites followed a similar pattern to that of Si sites, which also underwent a polymerization process. However, two major differences can be observed. First, all Al 0 sites were consumed completely at the end regardless of the Si/Al ratio. Second, some Al 5 sites (pentacoordinate Al) and trace quantities of Al 6 sites were formed besides Al 1, Al 2, Al 3, and Al 4. The presence of the pentacoordinate Al will be further discussed in Section 3.2.3.
Atomic structure of N-A-S-H model

3.2.1 Bond length and bond angle
To describe the atomic structure of the obtained N-A-S-H gel, the average bond lengths for Si-O and Al-O are given in Table 2. The Si-O bond length is mainly located at 1.61-1.63 Å, while the Al-O bond length is slightly longer (1.83-1.85 Å). These results are in line with the simulation results in [9,17] and the experimental results in [18]. As the Si/Al ratio increases, both the Si-O bond and the Al-O bond become longer.
X-ray diffraction
The X-ray diffraction pattern can further confirm the amorphous nature of the obtained N-A-S-H. In geopolymers, the typical hump for N-A-S-H gel is located from 25°-40° (2θ) [3]. A well-matched hump from 20°-40° can be found in all XRD patterns with different Si/Al ratios in Fig. 3. The Si/Al ratio has an apparent effect on the N-A-S-H gel structure: as the Si/Al ratio increases, the hump shifts to a smaller angle. Similar results were obtained in Lee's research [20], where the typical hump is located at 28.54°, 26.85°, and 26.27° (2θ) for geopolymer pastes with Si/Al ratios of 1.5, 3.5, and 4.0, respectively. According to Bragg's law, a smaller angle corresponds to a larger interplanar spacing. That means an N-A-S-H gel structure with a higher Si/Al ratio has a larger interplanar spacing, indicating a less compact structure. This result is supported by the relationship between bond length and Si/Al ratio mentioned in Section 3.
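The trend can be checked with a one-line application of Bragg's law, d = λ / (2 sin θ). In the minimal sketch below, the Cu Kα wavelength (1.5406 Å) is an assumption; the 2θ hump positions are those quoted from Lee's research.

```python
import numpy as np

wavelength = 1.5406                           # angstrom, assumed Cu K-alpha
two_theta = np.array([28.54, 26.85, 26.27])   # degrees, Si/Al = 1.5, 3.5, 4.0

# Bragg's law: d = lambda / (2 sin(theta)), with theta = two_theta / 2
d = wavelength / (2 * np.sin(np.radians(two_theta / 2)))
print(d.round(3))  # larger d at smaller angle -> less compact structure
```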
3.2.3 Q n distribution

The Q n distribution is the most important structural parameter to explain how Si and Al are linked in the N-A-S-H gel framework. Fig. 4 shows the Si site (Si n) distribution for all N-A-S-H structures with different Si/Al ratios. Four types of Si n units, Si 1, Si 2, Si 3, and Si 4, were found in all N-A-S-H gel structures. According to previous studies [21,22], Si mainly exists in the form of Q 4 in the N-A-S-H gel framework, as N-A-S-H gel is a 3D network structure. Q 1 and Q 2, present at the surface of the gel, are expected to account for only a small fraction. However, 30%-56% of Si 1 and Si 2 can be found in the simulated N-A-S-H gels, which is higher than in experiments [23]. This is due to a surface effect: the simulations took place in a very small box, so the surface-atom to bulk-atom ratio is far greater than in a real material. That is why more Q 1 and Q 2 were found in the simulated structures. Besides, the effect of the Si/Al ratio on the Si n site distribution can be clearly observed. With a lower Si/Al ratio, the N-A-S-H gel structure has more Si 3 and Si 4 and less Si 1 and Si 2, which means the structure is more crosslinked. This result confirms that lower Si/Al ratios tend to form a 3D network (more Si 3 and Si 4), while higher Si/Al ratios prefer a 2D crosslinked structure (more Si 1 and Si 2) [24]. The distribution of Al sites within all N-A-S-H gel structures is shown in Fig. 5. Al 4 and Al 5 are the two main Al sites for all the simulated structures with different Si/Al ratios. It is generally accepted that Al always stays tetrahedrally coordinated [21], and Al 4 is the main existing form based on NMR results [5,21]. Pentacoordinate Al and six-coordinated Al were once believed to come only from the unreacted raw materials in geopolymer pastes [5,19], until Walkley proposed an N-A-S-H model containing six-coordinated Al for the first time [22]. Actually, pentacoordinate Al and six-coordinated Al have been found to play charge-balancing roles in aluminosilicate glass systems in many early reports [25,26]. The presence of Al 5 has also been detected in some N-A-S-H structures built by molecular dynamics [27,28]. These tetrahedral Al and non-tetrahedral Al can explain why the O-Al-O bond angle has a wide range, from 60°-180°.
Fig. 1 Evolution of Q n sites for (a) Si sites and (b) Al sites (only shown for a Si/Al ratio of 1.0).

Fig. 2(a) and (b) show the distribution of the O-Si-O bond angle and the O-Al-O bond angle for all N-A-S-H gel structures, respectively. The distribution of the O-Si-O bond angle has a main peak at around 110° regardless of the Si/Al ratio. This indicates that all Si in N-A-S-H gel is tetrahedral. For the O-Al-O bond angle, a much wider range can be observed in Fig. 2(b). More specifically, the distribution of the O-Al-O bond angle has two peaks, at around 95° and 150°, respectively. These two peaks indicate that the Al environment in N-A-S-H gel is not as uniform as the Si environment, which will be further explained.
Fig. 3 Simulated XRD patterns for all N-A-S-H models with different Si/Al ratios
Fig. 4 Distribution of Si n sites within all N-A-S-H models with different Si/Al ratios
Fig. 5 Distribution of Al n sites within all N-A-S-H models with different Si/Al ratios
Table 2 Average bond length of N-A-S-H model compared to other work
Besides bond length, bond angle is another key parameter to reveal the details of N-A-S-H gel structure.
"Materials Science"
] |
UTX and UTY Demonstrate Histone Demethylase-Independent Function in Mouse Embryonic Development
UTX (KDM6A) and UTY are homologous X and Y chromosome members of the Histone H3 Lysine 27 (H3K27) demethylase gene family. UTX can demethylate H3K27; however, in vitro assays suggest that human UTY has lost enzymatic activity due to sequence divergence. We produced mouse mutations in both Utx and Uty. Homozygous Utx mutant female embryos are mid-gestational lethal with defects in neural tube, yolk sac, and cardiac development. We demonstrate that mouse UTY is devoid of in vivo demethylase activity, so hemizygous XUtx− Y+ mutant male embryos should phenocopy homozygous XUtx− XUtx− females. However, XUtx− Y+ mutant male embryos develop to term; although runted, approximately 25% survive postnatally reaching adulthood. Hemizygous X+ YUty− mutant males are viable. In contrast, compound hemizygous XUtx− YUty− males phenocopy homozygous XUtx− XUtx− females. Therefore, despite divergence of UTX and UTY in catalyzing H3K27 demethylation, they maintain functional redundancy during embryonic development. Our data suggest that UTX and UTY are able to regulate gene activity through demethylase independent mechanisms. We conclude that UTX H3K27 demethylation is non-essential for embryonic viability.
Introduction
Post-translational modifications of histones establish and maintain active or repressive chromatin states throughout cell lineages. Thus, the enzymes that catalyze these modifications often have crucial roles in establishing genomic transcriptional states in developmental decision-making. Histone methylation can stimulate gene activation or repression depending on which residues are targeted. Methylation of histone H3 on Lysine 4 (H3K4me) is an active chromatin modification, while methylation on histone H3 Lysine 27 (H3K27me) is associated with repression of gene activity [1].
Utx and Uty are genetically amenable to delineating H3K27me3 demethylation dependent versus demethylation independent function in mouse development. Comparative amino acid sequence analysis of UTX and UTY reveals 88% sequence similarity in humans (83% identity) and 82% sequence similarity in mouse. Across the annotated JmjC histone demethylase domain, the similarity is 98% and 97% for human and mouse, respectively. In the TPR (tetratricopeptide repeat) domain, the similarity is 94%. So while UTY is reported to have lost H3K27 demethylase activity, it is remarkably well conserved with respect to UTX. Recent discoveries have revealed that JMJD3 functions in the macrophage lipopolysaccharide response and the lymphocyte Th1 response through H3K27 demethylase independent gene regulation [36,37], suggesting that the function of this family of proteins is not limited to histone demethylation. It has been hypothesized that X and Y chromosome homologs will escape X-inactivation in instances where the Y homolog has not lost functional activity and male to female dosage remains balanced [38]. Therefore, it is possible that UTX and UTY have functional overlap in H3K27 demethylase independent gene regulatory processes.
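For readers unfamiliar with how such percentages are derived, the minimal Python sketch below computes percent identity and percent similarity from a pairwise alignment. The short sequences and the amino-acid similarity groups are illustrative assumptions, not the actual UTX/UTY alignment or scoring scheme.

```python
# Crude similarity groups of amino acids (an assumption for illustration)
SIMILAR_GROUPS = [set("ILVM"), set("FYW"), set("KRH"), set("DE"),
                  set("ST"), set("NQ"), set("AG"), set("C"), set("P")]

def identity_similarity(aln_a: str, aln_b: str):
    """Score two equal-length aligned sequences ('-' marks a gap)."""
    pairs = [(a, b) for a, b in zip(aln_a, aln_b) if a != "-" and b != "-"]
    ident = sum(a == b for a, b in pairs)
    simil = sum(a == b or any(a in g and b in g for g in SIMILAR_GROUPS)
                for a, b in pairs)
    return 100 * ident / len(pairs), 100 * simil / len(pairs)

print(identity_similarity("MKTVYLDE", "MRTIYLDQ"))  # -> (62.5, 87.5)
```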
A recent publication by Lee et al. characterized heart defects in Utx homozygous embryos [39]. Cell culture experiments suggested that the phenotype resulted from H3K27 demethylase activity. Utx hemizygotes were reported to have a wide range of abnormalities, but it was not clear whether any phenotypes overlapped with the Utx homozygotes, as no comparative data were shown. Given that Uty remained intact in these studies, it was not possible to conclude definitively whether Utx demethylase activity is essential for early embryonic development. Furthermore, it is not known whether mouse UTY is capable of H3K27 demethylation. The classification of UTY as having no demethylase activity is based on in vitro assays only; in vivo demethylase activity enabled by other co-factors remains a possibility. Also, mouse UTY has considerable sequence divergence from human UTY: the two proteins are 75% identical overall, and 95% identical in the JmjC demethylase domain. Thus, it is possible that mouse UTY has retained demethylase activity.
In our study, we have generated mouse mutations in both Utx and Uty. Hemizygous Utx mutant male mice (X Utx− Y +) were runted at birth, with only a small number surviving to adulthood. In contrast, Utx homozygous females (X Utx− X Utx−) had severe phenotypes at mid-gestation, with developmental delay and neural tube closure, yolk sac, and heart defects. Unlike homozygotes, Utx hemizygotes lack mid-gestational cardiovascular defects and are recovered in Mendelian frequencies at E18.5. Furthermore, compound hemizygous male embryos (X Utx− Y Uty−) carrying mutations of both Utx and Uty phenocopy the Utx homozygotes. Thus, the disparity in hemizygous and homozygous Utx phenotypes is due to compensation by Uty in the hemizygous male embryos. We have utilized an in vivo H3K27 demethylation assay to demonstrate that mouse UTY is not capable of H3K27 demethylation. Additionally, cell culture data indicate that UTX and UTY may function in gene activation, as both proteins associate with the H3K4 methyl-transferase complex, the BRG1 chromatin remodeler, and heart transcription factors. Our results implicate a crucial H3K27 demethylase independent function for UTX and UTY in mouse embryonic development. This is the first ascribed function for UTY, and the first example of developmental redundancy for X and Y chromosome homologous genes. Notably, our data suggest the H3K27 demethylase activity of UTX is not essential for embryonic viability.
Results
Hemizygous Utx mutant male mice have reduced perinatal viability

We developed mutant mouse lines to assess the contribution of UTX H3K27 demethylase function in mouse development. Two alleles for Utx were obtained from public resources. The BayGenomics gene trap line Kdm6a Gt(RRA094)Byg is designated X UtxGT1 (Figure 1A). RT-PCR and PCR genotyping verified the identity of this allele in both ES cells and mutant mice (Figures S1 and S2A-S2C). Additionally, we obtained the EUCOMM Kdm6a knockout line (project 26585, Kdm6a tm1a(EUCOMM)Wtsi), designated X UtxGT2fl, which inserts a gene trap in intron 2 along with a floxed 3rd exon (Figure 1A). Southern blotting and PCR genotyping verified the identity of this allele (Figures S1 and S2D-S2F). Notably, quantitative RT-PCR comparison of tail RNA from X UtxGT1 Y Uty+ versus X UtxGT2fl Y Uty+ mice demonstrated that Utx gene trap 1 is more effective than gene trap 2 (a 96% reduction compared to a 61% reduction in Figures S2C and S2F). Because X UtxGT2fl demonstrated incomplete trapping, the 3rd exon was deleted with Cre recombinase to establish X UtxGT2D (containing both the gene trap and the deleted 3rd exon, Figure 1A). Deletion of the third Utx exon produces a frameshift and introduces a translational stop codon when Utx is spliced from exon 2 to exon 4. X UtxGT1 and X UtxD are null alleles, as UTX protein was eliminated in western blotting of these embryonic lysates (Figure 1B, 1C). Consistent with the RT-PCR data, X UtxGT2fl exhibits a reduction but not an absence of UTX protein (Figure 1D).
Heterozygous Utx female mice were crossed to wild type male mice to produce hemizygous Utx mutant males. At weaning, the hemizygous X UtxGT1 Y Uty+ , X UtxGT2D Y Uty+ , and X UtxGT2fl Y Uty+ mice all exhibited reductions of 68%, 83%, and 55% respectively from the expected genotype frequencies based on these crosses, yet expected genotype frequencies were observed at embryonic day E18.5 (Table 1). At E18.5, most of the hemizygous Utx males appeared phenotypically normal; however a small percentage of the fetuses exhibited exencephaly. At birth, the hemizygous Utx males were small and exhibited a failure to thrive phenotype. Those males that survived through this phenocritical phase reached adulthood and were fertile. Hemizygous Utx mutant males were runted compared to wild type littermates and remained smaller than controls throughout their lifespan (Figure 2A, 2B). Backcross of the Utx allele onto a C57BL/6J or 129/SvJ background affected postnatal viability, but hemizygous Utx male embryos were still readily obtained at E18.5 (Table S1).
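The genotype-frequency comparisons here (and the χ² p-values reported in the supplementary tables) amount to a chi-square goodness-of-fit test of observed counts against Mendelian expectations. The minimal sketch below uses hypothetical counts, not the paper's data: a heterozygous female × wild-type male cross yields four genotype classes at equal expected frequency.

```python
from scipy.stats import chisquare

observed = [34, 36, 31, 11]          # last class: hemizygous mutant males
expected = [sum(observed) / 4] * 4   # Mendelian 1:1:1:1 expectation

stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")  # small p -> a depleted genotype
```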
Author Summary
Trimethylation at Lysine 27 of histone H3 (H3K27me3) establishes a repressive chromatin state in silencing an array of crucial developmental genes. Polycomb repressive complex 2 (PRC2) catalyzes this precise post-translational modification and is required in several critical aspects of development, including Hox gene repression, gastrulation, X-chromosome inactivation, mono-allelic gene expression and imprinting, stem cell maintenance, and oncogenesis. Removal of H3K27 trimethylation has been proposed to be a mechanistic switch to activate large sets of genes in differentiating cells. Mouse Utx is an X-linked H3K27 demethylase that is essential for embryonic development. We now demonstrate that Uty, the Y-chromosome homolog of Utx, has overlapping redundancy with Utx in embryonic development. Mouse UTY has a polymorphism in the JmjC demethylase domain that renders the protein incapable of H3K27 demethylation. Therefore, the overlapping function of UTX and UTY in embryonic development is due to an H3K27 demethylase independent mechanism. Moreover, the presence of UTY allows UTX-deficient mouse embryos to survive until birth. Thus, UTX H3K27 demethylation is not essential for embryonic viability. These intriguing results raise new questions on how H3K27me3 repression is removed in the early embryo.

Additionally, the gene trap of X UtxGT2fl was excised with Flp recombinase to create a standard floxed exon 3 (X Utxfl), and Cre recombination created X UtxD. (B) Western blotting of E18.5 liver demonstrates a complete loss of UTX in X UtxGT1 Y Uty+ lysates. RbBP5 was used as a loading control. (C) Western blotting of E10.5 whole embryo demonstrates a complete loss of UTX in X UtxD Y Uty+ and X UtxD X UtxD lysates. RbBP5 was used as a loading control. (D) Western blotting of E12.5 primary MEFs demonstrates a reduction of UTX in X UtxGT2fl Y Uty+ and X UtxGT2fl X UtxGT2fl lysates. RbBP5 was used as a loading control. doi:10.1371/journal.pgen.1002964.g001

Homozygous Utx females are mid-gestational embryonic lethal

Human UTY lacks demethylase activity based on in vitro assays, so we hypothesized that X Utx− X Utx− homozygous females would phenocopy X Utx− Y Uty+ hemizygous males in demethylase dependent function (UTX specific), but might demonstrate a more severe phenotype in demethylase independent roles. Homozygous X UtxGT1 X UtxGT1 and X UtxGT2D X UtxGT2D females were never observed at weaning or at embryonic day E18.5 (Table 1), but were observed at expected genotype frequencies at E10.5. However, these embryos were dead and resorbed by E12.5 (Table 1). Notably, at E10.5 all homozygous X UtxGT1 X UtxGT1 and X UtxGT2D X UtxGT2D females were smaller in size and had open neural tubes in the midbrain region (Figure 3A-ii, iii, vi, vii). Variation in the severity of the Utx homozygous phenotypes was observed in mutant embryos, ranging from medium-sized embryos with typical E10.5 features (Figure 3A-ii, vi) to much smaller embryos resembling the E9.5 timepoint (Figure 3A-iii, vii). The X UtxGT1 and X UtxGT2D alleles failed to complement, as trans-heterozygous X UtxGT1 X UtxGT2D female embryos resembled the individual homozygous alleles (Figure 3A-viii). Hemizygous X UtxGT1 Y Uty+ male embryos appeared phenotypically normal at E10.5 (Figure 3A-iv). Homozygous X UtxGT2fl X UtxGT2fl females exhibited a slight reduction in phenotypic severity; about half of the mutant embryos had open neural tubes, and some survived to E12.5 (Table 1).
To distinguish between embryonic and extraembryonic contribution of UTX towards the homozygous phenotype, we crossed the Sox2Cre transgene into the Utx fl background. In this cross, paternally inherited Sox2Cre expression will drive Utx deletion specifically in embryonic tissue [40]. No X Utxfl X Utxfl , Sox2Cre female embryos were recovered at E18.5, whereas X Utxfl Y Uty+ , Sox2Cre male embryos were recovered at expected frequencies (Table S2). At E10.5, X Utxfl X Utxfl , Sox2Cre embryos produced phenotypes largely identical to Utx homozygotes. In summary, Utx homozygous females demonstrate a significantly more severe embryonic phenotype in comparison to Utx hemizygous males. Mid-gestational lethality is typically associated with defective cardiovascular development. Accordingly, we observed both heart and yolk sac vasculature/hematopoietic phenotypes in Utx homozygotes. Utx homozygous mutant hearts were small and underdeveloped, and more severe embryos exhibited peri-cardial edema ( Figure 3A-ii, iii, vi, vii). The yolk sac vasculature of Utx homozygotes was pale with a reduction in the amount of vascular blood ( Figure 3B-ii). In more severe examples, homozygous yolk sacs were completely pale with an unremodeled vascular plexus ( Figure 3B-iii). Thus, abnormal cardiovascular function may be a source of lethality and developmental delay in Utx homozygous mutant embryos.
UTX and UTY have redundant function in embryonic development
The most likely explanation for the disparity between Utx hemizygotes and homozygotes is that UTY can compensate for the loss of UTX in embryonic development. We tested Utx and Uty expression in embryonic development to assess any overlap in expression patterns. Utx expression was initially gauged using the β-galactosidase reporter in X Utx+ X UtxGT1 and X UtxGT1 X UtxGT1 whole mount E10.5 embryos. Utx was expressed at low levels throughout the E10.5 embryo with a particular enrichment in the neural tube and otic placode (Figure S3A-ii, iii, iv). In situ hybridization for both Utx and Uty demonstrated similar expression patterns characterized by widespread low-level expression with particular enrichment in the neural tube (Figure S3B-ii, iii, v, vi). Our analysis of publicly available RNA-seq data sets [41,42] revealed similarly low levels of expression for Utx and Uty.

Compared with controls, homozygous female E10.5 X UtxGT1 X UtxGT1 (A-ii) and X UtxGT2D X UtxGT2D (A-vi) embryos have some developmental delay, including smaller size, underdeveloped hearts (white arrows), and an open neural tube in the head (arrowheads). More severe embryos resemble the size and features of E9.5 embryos, with cardiac abnormalities and peri-cardial edema (A-iii, vii, red arrows). Hemizygous male X UtxGT1 Y Uty+ embryos appear phenotypically normal at this stage (A-iv). The X UtxGT1 and X UtxGT2D alleles fail to complement, as female X UtxGT1 X UtxGT2D embryos have identical phenotypes to homozygotes (A-viii). (B) At E10.5, homozygous X UtxGT1 X UtxGT1 female embryos exhibit either normal yolk sac vasculature with a reduction in red blood cells (B-ii) or a completely pale yolk sac with an unremodeled vascular plexus (B-iii). doi:10.1371/journal.pgen.1002964.g003

To determine whether UTY can compensate for the loss of UTX, we obtained the Wellcome Trust Sanger Institute gene trap line Uty Gt(XS0378)Wtsi, designated Y UtyGT (Figure 4A). This line, with an insertion in intron 4, traps the Uty transcript at a similar position in the coding sequence as the Utx alleles (compare to Figure 1A). This gene trap line was verified by RT-PCR in ES cells and subsequent mice (Figures S1 and S2G), and it achieved a 99% reduction in Uty expression in X Utx+ Y UtyGT mouse tail RNA (Figure 4B). Hemizygous Uty mutant males, X Utx+ Y UtyGT, were viable and fertile (Table 1). However, no compound hemizygous X UtxGT1 Y UtyGT or X UtxGT2D Y UtyGT embryos were recovered at E18.5 (Table 1). At E10.5, expected genotype frequencies of X UtxGT1 Y UtyGT and X UtxGT2D Y UtyGT males were observed, but these embryos phenocopied the developmental delay, neural tube closure, cardiac, and yolk sac defects observed in Utx homozygous embryos (Figure 4C-iii, iv).
UTX and UTY redundancy is essential for progression of cardiac development
We performed a more detailed phenotypic assessment of Utx and Uty mutant hearts to scrutinize the extent of phenotypic overlap between X Utx− Y Uty+, X Utx− X Utx−, and X Utx− Y Uty− embryos. Analysis of cardiac development in similarly sized E10.5 embryos (Figure 5A-i, ii, iii, iv) revealed that Utx homozygotes and Utx/Uty compound hemizygotes failed to complete heart looping (Figure 5A-vi, viii), whereas Utx heterozygotes and hemizygotes were phenotypically normal (Figure 5A-v, vii). Additionally, homozygotes and compound hemizygotes had smaller hearts with a lack of constriction between the left and right ventricles. Sectioning of E10.5 hearts confirmed that Utx homozygotes and Utx/Uty compound hemizygotes have small hearts with a reduction in ventricular myocardial trabeculation and little or no initiation of interventricular septum formation (Figure 5B-ii, iv). The outer ventricular wall of these embryos is much thinner, and the overall number of cardiomyocytes and the myocardial structure are severely deficient (Figure 5C-ii, iv). In summary, while mid-gestational hearts appear normal in X Utx− Y Uty+ hemizygous males, X Utx− X Utx− homozygous females and X Utx− Y Uty− compound hemizygous males display identical deficiencies in cardiac development. Therefore, UTY compensates for the loss of UTX in hemizygous Utx mutant males, rescuing the mid-gestational cardiac phenotypes.

Mouse and human UTY are incapable of H3K27 demethylation in vivo

UTX and UTY have redundant function in embryonic development, but it is not known whether mouse UTY is capable of H3K27 demethylation. Two independent publications demonstrated that human UTY has no catalytic activity in H3K27 demethylation in vitro [26,29]. It is possible that human UTY (and not mouse UTY) has accumulated a specific polymorphism rendering it demethylase deficient. Additionally, in vitro assays remove UTY from its natural cellular context and may lack cofactors required to promote H3K27 demethylation. Therefore, we utilized an intracellular, in vivo demethylation assay, whereby HEK293T cells transiently over-expressing the UTX carboxy-terminus (encoding the JmjC and surrounding domains essential for proper structure and function) exhibit a reduction in H3K27me3 immunofluorescence levels [43]. In our assay, wild type and mutant constructs were expressed at similar levels (Figure S4A), and individual cells expressing similar, medium-high expression levels of each construct were selected for analysis (Figure 6). Expression of Flag-tagged human and mouse UTX demethylated H3K27me3 and H3K27me2, while a mutation known to disrupt activity (H1146A) was unable to demethylate H3K27 (Figure 6A and 6B, Figure S5A). Human UTX expression had no effect on other histone modifications we tested, such as H3K4me2 (Figure S5B). In contrast, neither human nor mouse UTY was capable of demethylating H3K27me3 and H3K27me2 (Figure 6A and Figure S5A). Cells expressing medium-to-high levels of UTY (N > 100) never exhibited a reduction in H3K27me3 levels relative to nearby untransfected controls.
Our previous structural analysis of human UTX [43], combined with sequence alignments (Figure 6C and Figure S6), suggested several amino acid substitutions in the human and mouse UTY sequences that might make them catalytically inactive. We introduced these mutations into the human UTX C-terminal fragment (Y1135C, T1141I, SNR1025NKS, G1172D/G1191S, I1267P, I1267V, and H1329P) and examined their effects on in vivo demethylation activity. Of all the mutations tested, only the Y1135C and T1143I mutations completely abolished the ability of UTX to demethylate H3K27 (Figure 6B, 6D). Complete loss of activity was similarly caused by mutations of the corresponding residues in JMJD3 (Y1377C and T1385I, Figure 6D). All qualitative data were also confirmed by immunofluorescence quantification (Figure S4B). Y1135 is conserved throughout all H3K27 demethylases (Figure S7), and in the crystal structure [43] it interacts with two of the three methyl groups of the H3K27me3 side chain, as well as N-oxalylglycine (NOG; an analog of the cofactor alpha-ketoglutarate) (Figure 6E). The smaller C947 side chain of mouse UTY would not effectively maintain either interaction. T1143 is conserved throughout H3K27, H3K9, and H3K36 demethylases (Figure S8), and also interacts with NOG (Figure 6E). Its replacement with bulky isoleucine not only removes the hydroxyl group that interacts with alpha-ketoglutarate, but may also sterically hinder its binding. These observations are consistent with the fact that no H3K27 demethylation activity has been detected for mouse UTY, and we therefore conclude that the catalytic domain of mouse UTY has crucial amino acid replacements that render the protein incapable of H3K27 demethylation. On the other hand, we failed to identify why human UTY is catalytically inactive. Notably, restoring the two crucial mouse UTY polymorphisms (M-UTY C947Y, I955T) failed to recover H3K27 demethylase activity (Figure 6B). These data suggest that unidentified structural elements in the UTY C-terminal region are also responsible for the lack of H3K27 demethylase activity.
UTX and UTY associate in common protein complexes and are capable of H3K27 demethylase independent gene regulation

Although human and mouse UTY have lost the ability to demethylate H3K27, they retain considerable sequence similarity with UTX, suggesting a conserved function. To gain more insight into the overlap in UTX and UTY activities, we performed a biochemical analysis of tagged constructs to determine whether UTX and UTY can associate in common protein complexes. Co-transfection of Flag-tagged UTX or UTY with HA-UTX followed by immunoprecipitation demonstrated that UTX can form a multimeric complex with itself and with UTY (Figure 7A). UTX associates with an H3K4 methyl-transferase complex containing MLL3, MLL4, PTIP, ASH2L, RBBP5, PA-1, and WDR5 [23,44]. To examine incorporation into this complex, we performed immunoprecipitations with Flag-tagged UTX and UTY constructs. Both UTX and UTY were capable of associating with RBBP5 (Figure 7B). Thus, UTX and UTY are incorporated into common protein complexes.
To identify common gene targets of UTX- and UTY-mediated regulation, we generated E10.5 mouse embryonic fibroblast (MEF) cell lines containing mutations in Utx and Uty (alleles X UtxGT2D and Y UtyGT). The gene traps in these MEFs efficiently trapped Utx and Uty transcripts (Figure 7C). These MEFs did not demonstrate differences in levels of global H3K27me3 (Figure S9A). Genome-wide UTX promoter occupancy has been mapped in fibroblasts [30]. Therefore, we screened our mutant MEFs for misregulated genes affected by the loss of both Utx and Uty that had been documented as direct UTX targets. The FNBP1 promoter is bound by UTX [30]. We verified UTX and UTY binding to the Fnbp1 promoter by ChIP (Figure S9B and S9C). Fnbp1 expression was reduced to 68% of WT levels in X Utx− Y Uty+ MEFs, but was further compromised to 42% in X Utx− X Utx− lines and 48% in X Utx− Y Uty− MEFs, in which all Utx and Uty activity was lost (Figure 7C). Analysis of E12.5 MEFs of a secondary allele (X UtxGT2fl) also demonstrated diminished Fnbp1 expression in both X Utx− X Utx− and X Utx− Y Uty− MEFs (Figure 7D). Therefore, Fnbp1 expression is positively regulated by both UTX and UTY.

Transfected cells (white arrows) over-expressing H-UTX and M-UTX (Flag immunofluorescence, green pseudo-color) exhibited global loss of H3K27me3 immunofluorescence (red pseudo-color). Cells transfected with H-UTY and M-UTY C-terminal constructs did not demethylate H3K27me3. (B) H3K27me3 demethylase assay of UTX and UTY mutant constructs. H-UTX H1146A contains a point mutation in a residue that was previously reported as defective in H3K27 demethylation. Cells expressing H-UTX H1146A had no loss of H3K27me3. Mouse UTY has a Y-to-C amino acid change that corresponds to position 1135 in human UTX. This UTX residue is predicted to regulate H3K27me3 binding and demethylation. Expression of H-UTX Y1135C failed to demethylate H3K27me3. Mouse UTY also has a T-to-I amino acid change that corresponds to position 1143 in human UTX that is predicted to regulate binding of ketoglutarate in the demethylase reaction. Expression of H-UTX T1143I failed to demethylate H3K27me3. Correction of these two altered residues in mouse UTY (M-UTY C947Y, I955T) failed to recover H3K27me3 demethylase activity. (C) Alignment of the JmjC domains of human/mouse UTX, human UTY, mouse UTY, and human/mouse JMJD3. UTY non-conservative substitutions are indicated by white boxes and residues of interest are labeled with red asterisks. The UTX mutations that were analyzed are listed above the alignment, while JMJD3 mutations are listed below the alignment. (D) HEK293T cells were transfected with C-terminal UTX and UTY constructs or full-length mouse JMJD3 constructs carrying various amino acid substitutions. Medium-high expressing cells (N ≥ 100 cells scored for each experiment) were scored for any visible reduction in H3K27me3 levels relative to nearby untransfected cells. Flag vector transfection was used as a negative control for immunoprecipitation. (C) Fnbp1, a gene targeted directly by UTX, has intermediate downregulation in X Utx− Y Uty+ MEFs (68% of WT, t-test p-value = 0.002), but was further compromised in X Utx− X Utx− (42% of WT, t-test p-value relative to X Utx− Y Uty+ = 0.001) and X Utx− Y Uty− (48% of WT, t-test p-value relative to X Utx− Y Uty+ = 0.02; N > 4 independent MEF lines per genotype) MEFs.
To examine the role of UTX and UTY in Fnbp1 regulation, we performed H3K27me3 ChIP on E12.5 X Utx+ Y Uty+ and X Utx− X Utx− MEFs (Figure 7E). Quantitative PCR for an intergenic region served as a negative control, while HoxB1 served as a positive control for H3K27me3. Quantitative PCR demonstrated that the Fnbp1 promoter has relatively low levels of H3K27me3 with no additional accumulation in X Utx− X Utx− MEFs (Figure 7E). In contrast, H3K4me3 accumulated significantly at the Fnbp1 promoter (Figure 7F), and a loss of Fnbp1 H3K4me3 was observed in X Utx− X Utx− MEFs (Figure 7F). Therefore, UTX and UTY appear to function in Fnbp1 activation by regulating promoter H3K4 methylation rather than H3K27 demethylation.
UTX and UTY can both associate with heart transcription factors to regulate downstream target genes
It has been documented that UTX can associate with heart transcription factors and with the SWI/SNF chromatin remodeler BRG1 [39]. It has been hypothesized that UTX association with these factors mediates H3K27 demethylase dependent and demethylase independent induction of the cardiomyocyte specification program. As UTX and UTY have redundant demethylase independent function in embryonic development, we examined whether UTY can also associate with these proteins. Co-transfection of Myc-UTY with Flag-BRG1 followed by immunoprecipitation demonstrated that UTY associates with BRG1 (Figure 8A). Myc-UTY also co-immunoprecipitated with Flag-NKX2-5, Flag-TBX5, and Flag-SRF (Figure 8B and Figure S10A). Thus, UTY can form the same protein complexes as UTX with respect to BRG1 and heart transcription factors.
To examine the function of UTY in directing activation of downstream heart transcription factor targets, we assessed the regulation of one previously characterized target, atrial natriuretic factor (ANF) [39]. Co-transfection of NKX2-5 with an ANF promoter-Luciferase reporter construct demonstrated significant upregulation of expression from the ANF promoter (Figure 8C). The reporter expression was significantly enhanced when NKX2-5 was co-transfected with UTY (Figure 8C). The level of ANF reporter transcriptional enhancement was relatively weaker with UTY than with UTX, but this is most likely due to a reduction in the transfection efficiency of full-length UTY relative to UTX (as demonstrated in Figure 7A and 7B). UTY also significantly enhanced the ANF reporter response to TBX5 (Figure S10B). Finally, ANF expression was significantly affected only in the hearts of X Utx− X Utx− and X Utx− Y Uty− embryos (with 52% and 57% expression relative to X Utx+ Y Uty+ controls, Figure 8D). X Utx− Y Uty+ hemizygotes had only a moderate loss of ANF expression (76% relative to X Utx+ Y Uty+) that was not statistically distinguishable from wild type controls due to the variability in ANF expression. In summary, both UTX and UTY can associate with heart transcription factors to modulate expression of downstream targets.
Discussion
We have undertaken a rigorous genetic analysis contrasting UTX and UTY function in mouse embryonic development. In alignment with the current literature, Utx homozygous females are lethal in mid-gestation with a block in cardiac development [39]. We now demonstrate that Utx hemizygous mutant males are viable at late embryonic timepoints in expected Mendelian frequencies; in fact, approximately 25% are capable of reaching adulthood. Our comprehensive phenotypic analysis of Utx hemizygous males illustrates that these embryos are phenotypically normal at mid-gestation and lack the cardiovascular dysfunction of Utx homozygous females. This stark phenotypic disparity suggests that UTY may compensate for the loss of UTX in the male embryo. Compound hemizygous Utx/Uty mutant male embryos phenocopy the cardiovascular defects and gross developmental delay of homozygous females, proving that UTX and UTY have redundant function in embryonic development. As we have demonstrated that mouse UTY lacks H3K27 demethylase activity in vivo, the overlap in embryonic UTX and UTY function is due to H3K27 demethylase independent activity. Given the widespread developmental delay and pleiotropy, it is difficult to assess the primary defect and tissue(s) responsible for UTX and UTY redundancy. The presence of functional UTY in Utx hemizygous males is not capable of preventing peri-natal runting and lethality, suggesting that UTX and UTY are not completely overlapping in activity. These later phenotypes could be due to H3K27 demethylase dependent activity of UTX. Furthermore, the lack of phenotype in Uty hemizygotes demonstrates the absence of any essential UTY-specific function in mouse development.
The UTY Jumonji-C domain has maintained high conservation in the absence of catalytic H3K27 demethylase activity. JMJD3-mediated regulation of the lymphocyte Th1 response requires an intact Jumonji-C domain, but is also not dependent on H3K27 demethylation [37]. Therefore, this domain may be an essential structural protein component, a protein binding domain, or a domain that demethylates non-histone substrates. UTX and UTY can associate in a common protein complex and can both interact with RBBP5 of the H3K4 methyl-transferase complex. UTX, UTY, and JMJD3 all associate with H3K4 methyl-transferase complexes from multiple mouse and human cell types [23,25,44,45]. The Fnbp1 promoter is bound by UTX, and gene expression is positively regulated by both UTX and UTY in MEFs. Based on our histone profiling at this locus, UTX and UTY affect the deposition of H3K4 methylation, not H3K27me3 demethylation. Therefore, the common UTX/UTY pathway in embryonic development may involve gene activation rather than removal of gene repression. JMJD3 has been linked more directly to transcriptional activation, as the protein complexes with and facilitates factors involved in transcriptional elongation [46]. One cardiac target of UTX regulation, atrial natriuretic factor (ANF), was misregulated in ES cell differentiation [39]. Cell culture experiments suggest that ANF may be a target of both H3K27 demethylase dependent and demethylase independent regulation; however, this study could not distinguish UTX versus UTY function in ES cell differentiation. Both UTX and UTY affect the transcriptional response of an exogenous ANF reporter in the presence of heart-specific transcription factors, suggesting that UTX and UTY can operate more directly by aiding in transcriptional activation of this gene rather than altering chromatin structure. Consistently, ANF expression was affected in X Utx− X Utx− and X Utx− Y Uty− embryonic hearts. UTX and UTY can both associate with the SWI/SNF chromatin remodeler BRG1, which has been hypothesized to mediate histone demethylase independent gene regulation, but the relevance and mechanism of this interaction are not known.

MEFs were generated from the X UtxGT2D and Y UtyGT alleles. (D) Fnbp1 is similarly mis-expressed in X UtxGT2fl allelic combinations of E12.5 MEFs. X Utx− X Utx− and X Utx− Y Uty− MEFs significantly differ from X Utx− Y Uty+ MEFs (t-test p-value = 0.05 and 0.02, respectively; N > 4 independent MEF lines per genotype). (E) H3K27me3 ChIP was performed on E12.5 X Utx+ Y Uty+ control (green) and X Utx− X Utx− (red) MEFs. An IgG antibody control is indicated in grey. Quantitative PCR for the ChIP was performed over a negative control region (an intergenic region) as well as a positive control (HoxB1). Fnbp1 failed to accumulate H3K27me3 in X Utx− X Utx− MEFs (t-test p-value = 0.5; N = 4 independent MEF lines per genotype). (F) H3K4me3 ChIP was performed on E12.5 X Utx+ Y Uty+ control (green) and X Utx− X Utx− (red) MEFs. An IgG antibody control is indicated in grey. Quantitative PCR for the ChIP was performed over a negative control region (intergenic region) as well as a positive control (Npm1). The WT Fnbp1 promoter exhibited significant H3K4me3 accumulation, which was reduced in X Utx− X Utx− MEFs (t-test p-value = 0.005; N = 3 independent MEF lines per genotype). doi:10.1371/journal.pgen.1002964.g007 Anf expression was analyzed from E10.5 heart RT-PCR of various X UtxGT2D and Y UtyGT allelic combinations.
Drosophila UTX associates with BRM (orthologous to BRG1) and CBP (an H3K27 acetyl-transferase), and the coupling of H3K27 demethylation with H3K27 acetylation may be essential for switching from a silent to an active state [47].
Female cells are subject to gene silencing of one X-chromosome (X-inactivation) to balance gene dosage with males. Theory on establishing X-inactivation for X-Y chromosome homologs hypothesizes that the initial entry step is loss of function or expression of the Y homolog to create dosage imbalance [38]. This prediction also dictates that conservation of X-Y homolog function will maintain gene dosage between the sexes, and the female X-homolog will not experience pressure to inactivate. Utx and Uty represent a unique paradox for this untested theory: UTY has lost demethylation activity, yet Utx escapes X-inactivation. We now demonstrate that UTX and UTY have retained embryonic redundancy, verifying the presumed correlation between X-inactivation escape and functional dosage balance. Zfx, Sox3, and Amelx represent unbalanced X-chromosome genes; they have similar hemizygous and homozygous mutant phenotypes, indicating that the Y chromosome homologs have lost redundant function [48,49,50,51,52,53,54]. Zfx and Sox3 are inactivated, while the Amelx inactivation status is unknown [55,56]. Of all mouse X and Y chromosome homologs, only Utx, Kdm5c, and Eif2s3x are known to escape X-chromosome inactivation [28,56,57,58]. Interestingly, both KDM5C (SMCX) and its Y chromosome homolog, KDM5D (SMCY), have retained catalytic activity in demethylation of H3K4 di- and tri-methyl residues [59,60,61,62]. In contrast to Utx X-chromosome escape driven by demethylation independent redundancy, Kdm5c may escape inactivation due to demethylation dependent redundancy. Our study is the first to demonstrate that an X-Y homologous pair that escapes X-inactivation maintains functional conservation, and this escape may stem from an evolutionary benefit of maintaining UTY demethylation independent function.
H3K27 demethylases are hypothesized to function in early developmental activation of "bivalent" PRC2 targets by coordinating H3K27 demethylation with H3K4 methylation. The H3K27 demethylation dependent phenotype (UTX specific) of Utx hemizygotes is not apparent until birth. The UTX H3K27 demethylase activity is dispensable for function in C. elegans [63]. Remarkably, the mammalian embryo, with numerous examples of H3K27me3 repression in early development, can survive to term without UTX histone demethylation. It is possible that there is further redundancy between UTX and JMJD3. JMJD3 mutant mice are not well characterized, but have been reported to be perinatal lethal with distinct features in comparison to Utx hemizygotes [32]. Therefore, it is likely that JMJD3 has distinct targets in development. Overall, the earliest H3K27 demethylation dependent phenotypes for all members of this gene family do not manifest until late embryonic development. This timepoint is much later than the converse early embryonic phenotypes from mutations in the H3K27 methyl-transferase complex [7,21,22]. Thus, there appears to be a lack of interplay between H3K27 methylation and demethylation in gene regulation, and the early embryonic removal of H3K27me3 from PRC2 mediated processes (such as ES cell differentiation, reactivation of the inactive X-chromosome, or establishing autosomal imprinting) may involve other mechanisms such as histone turnover or chromatin remodeling. H3K27 demethylases may certainly have crucial roles in the specification of progenitor cell populations of organ systems essential to peri-natal or postnatal viability, and genetic model systems will best assess the functional impact of H3K27 demethylation in these processes.
Luciferase assay
We received the ANF promoter-Luciferase reporter construct from Benoit Bruneau [69]. This construct was co-transfected in the presence of NKX2-5 or TBX5 with or without UTX or UTY. Luciferase activity was measured using the Promega Dual Luciferase Reporter Assay System on the Promega Glomax Multi Detection System. All readings were normalized to a Renilla Luciferase control that was co-transfected with all samples.
Mouse crosses
All mouse experimental procedures were approved by the University of North Carolina Institutional Animal Care and Use Committee. Utx homozygous data was generated either by crosses between Utx hemizygous males and Utx heterozygous females, or by crosses between X UtxGT2fl Y Uty+ VasaCre males and X Utx+ X UtxGT2D heterozygous females. X UtxGT2fl Y Uty+ VasaCre males were utilized because of an initial difficulty in generating X UtxGT2D Y Uty+ males and due to the efficient and specific activity of VasaCre in the male germline [70]. Utx hemizygous phenotypic data was developed from the previously mentioned homozygous crosses or through crosses between a WT male and heterozygous Utx female. Compound hemizygous Utx/Uty embryos were generated by crossing heterozygous Utx females with hemizygous Uty males. Embryos were PCR genotyped from yolk sac samples for Utx and were sexed by a PCR genotyping scheme to distinguish Utx from Uty. All primer sequences are available upon request.
Histology, in situ hybridization, and LacZ staining

Histology samples, in situ hybridization, and LacZ staining were performed as described [71]. In situ hybridization probes were generated to be identical to previous literature [72].

Figure S1 Schematic of genotyping strategies for Utx and Uty alleles. (A) The X UtxGT1 allele was genotyped with a three-primer scheme spanning the insertion site in intron 3. (B) The X UtxGT2fl allele was verified by Southern blotting with an HpaI restriction digest. HpaI sites are noted as "H", and the 5′ probe location is marked as a red box. The introduction of a novel HpaI site within the targeting cassette reduces the HpaI product from 17 kb to 10 kb. A three-primer scheme was designed for genotyping. Due to a deletion of intron 3 within the targeting vector, the product size of primers 1-2 will be larger in WT than in X UtxGT2fl, even with the introduction of the loxP site. Primers 3-2 will only amplify if Cre recombination takes place to delete exon 3.

Table S1 Utx hemizygous genotype frequency on inbred backgrounds. Observed (Obs) and expected (Ex) frequencies of indicated genotypes (Geno) at embryonic (E) or postnatal (P) developmental stages with χ² p-values (p-value) for the corresponding crosses to obtain each genotype. At E18.5 on the C57BL/6J background, 5 of the 8 observed X UtxGT1 Y Uty+ males were on the N8 generation. (DOC)
Table S2
Genotype frequencies of Sox2Cre-driven Utx mutation. Observed (Obs) and expected (Ex) frequencies of indicated genotypes (Geno) at embryonic (E) or postnatal (P) developmental stages with χ² p-values (p-value) for the corresponding crosses to obtain each genotype. (DOC)
"Biology"
] |
Dynamic Behaviour of High Performance of Sand Surfaces Used in the Sports Industry
The sand surface is considered a critical injury and performance contributing factor in different sports, from beach volleyball to greyhound racing. However, there is still a significant gap in understanding the dynamic behaviour of sport sand surfaces, particularly their vibration behaviour under impact loads. The purpose of this research was to introduce different measurement techniques to the study of sports sand surface dynamic behaviour. This study utilised an experimental drop test, accelerometry, and in-situ moisture content and firmness data to investigate the possible correlation between the sand surface and injuries. The analysis is underpinned by data gathered from greyhound racing and discussed where relevant.
Introduction
Sand surfacing is used in different sports, such as beach volleyball [1], equine racing [2,3] and greyhound racing [4-6]. The mechanical properties of the sand surface not only determine the performance of an athlete, be they human or a tetrapod, but are also an important injury-contributing factor [7,8]. There is still a significant gap in understanding the behaviour of a sand surface under impact load [9,10]. Accordingly, understanding the mechanical properties of the sand surface, the variables that alter its dynamic behaviour, and the methods to measure these variables is of paramount importance.
The characteristics of the sand are identified through the shape, size and percentage of the sand particles. The shape of the sand particles can vary from a 'very angular' to a 'well rounded' shape [11] and is a key influence on the dynamic behaviour of the sand [12]. There are two key variables used to classify sand particles, namely 'roundedness' and 'sphericity' [13]. Figure 1 provides a pictorial representation of the various sand particle shapes. As much as roundedness is desirable in terms of the impact attenuation properties, angularity is not. When the particles are very angular, they tend to pack tightly as the sharp corners interlock and will resist the movement of the particles when subjected to an impact. In contrast, well-rounded particles tend to smoothly transit, or flow, to different locations upon impact [14].
The amount of water retained in the sand (the sand moisture content) and the compaction rate (sand density) also determine the dynamic behaviour of the sand surface [15,16]. Accordingly, in a sport arena, where it is assumed the characteristics of the sand are controlled, the sand moisture content and density should be measured and compared against safety benchmarks (which differ depending on the industry) to avoid injuries. However, current safety benchmarks, mainly those used in greyhound racing arenas, are not backed up by science and research and are based solely on the experience of the track curators [2,17].
An example of an investigation into the effect of sand moisture content and density in a sports arena is the work conducted by Holt et al. [3], who studied the effect of sand moisture levels and rates of compaction, over two different drainage systems (limestone gravel and Permavoid™ drainage), on the dynamic performance of synthetic equestrian surfaces (93.84% sand, 5.15% fibre and 1.01% binding polymer). They used the Orono Biomechanical Surface Tester (OBST) [18], a 2.25 kg Clegg hammer [19] and a 30 kg traction device equipped with a horseshoe. The OBST, which simulates the collision between horse forelegs and the ground, was dropped four times on each surface for each treatment. The Clegg hammer was dropped four times following the protocol recommended by the ASTM Standard [20]. The 30 kg traction device, which was used to measure the traction of the surface, was dropped once in each of four locations of the test chamber for each treatment, from a height of 200 mm.
Thiel et al. [1] focused mainly on dry sand and designed a penetrometer to measure the stiffness of dry beach sand in situ. To validate their method, their results were compared with those of an in-laboratory study in which penetrometer tests were conducted on a sandbox [21]. They claim that their results are similar to the in-lab study and that their method can be used to measure the stiffness coefficient of dry sand prior to a sporting event.
Force transducers, mainly wearable sensors, are extensively used for gait analysis as they are cheap, easy to use, and user-friendly [22]. Accordingly, inertial measurement units (IMUs) have been used in different applications, mainly in clinical settings for gait analysis [22].
IMU technology can also be used to study the limb-surface interaction. In recent studies conducted by the author of this work, a single IMU was used to study the impact of different sports surfaces (grass vs wet sand) on the locomotion dynamics of galloping greyhounds [4-6]. Details of the most recent work [4] are discussed in the following sections. Worsey et al. [16] also used IMU technology (9-degrees-of-freedom (DOF) inertial-magnetic sensors, incorporating a 16 G accelerometer, a gyroscope, and a magnetometer) to compare athletes running over three different surfaces (running track, hard sand, and soft sand). The purpose of that work was to provide more insight into the previous observation that athletes alter their gait mechanics to accommodate different running surfaces [23].
Mathematical modelling, mainly with Spring-Loaded Inverted Pendulum (SLIP) models, first introduced by Blickhan et al. [24], is extensively used for gait analysis in different fields of science and engineering. SLIP models are simple and easy to interpret, yet provide substantial information about the subjects under study [24]. There are numerous off-the-shelf SLIP models, which one can modify for a given application. For instance, a SLIP model of a greyhound galloping on a sport sand surface was developed to study the effect of sand surfaces with different moisture content levels and rates of compaction on canine locomotion dynamics [25]. The results showed that small changes in the mechanical properties of the sand surface can significantly affect the amount of force acting on the greyhound hind-leg, which correlates well with the high rate of severe hind-leg injuries in this industry.
As discussed previously, the ideal track surface should have enough impact attenuation to damp the initial impact shock while providing enough traction for a stable gallop [26-28]. A surface with ideal mechanical properties has a low amount of energy loss and a low impact acceleration (G_max) when the foot comes into contact with the surface. Low energy loss also increases the performance of the animal in the race [29].
A high-performance surface, however, was associated with a higher risk of injuries; by contrast, a surface with strong impact attenuation properties tended to increase the muscular effort of the runner, which affected running performance [2].
Low sand density, or a low rate of compaction, is also associated with low rates of injury [30]. In practice, 'harrowing' is suggested for sand sport surfaces, as it can reduce the density or the rate of surface compaction [31]. However, a very low-density surface may have a detrimental effect on locomotion efficiency, as it affects the support needed for grip and for propelling the body forward [2].
Surface traction is another variable that identifies a safe surface composition. High traction will increase the bending moment applied to the bones, mainly the tarsal bones, and increase the risk of injuries [7]. However, insufficient traction, usually seen in drier sands, will leave the limb insufficiently supported during stance. Accordingly, as suggested by Holt et al. [3], increasing the moisture content of the sand while keeping its density low would result in a surface that is ideal for both race performance and injury reduction [3]. Overall, apart from acting as a supporting surface, the sand layer also acts as an energy-absorbing layer to mitigate the impact shock. In its optimal condition, the sand layer should have enough energy-absorbing capacity (reflected as energy loss and contact time) while providing acceptable surface traction [32,33].
Contact time is another critical variable that affects the safety performance of the surface. The shorter the contact time, the higher the risk of injuries, because of an increased rate of loading on the musculoskeletal system [34,35]. Accordingly, this variable is considered one of the primary safety thresholds in different applications, such as playground surfacing tests [36,37].
The purpose of this work is to introduce methods to study the dynamic behaviour of athletic sand surfaces, with the aim of improving athletes' performance while minimising the risk of injuries. The methods introduced here were originally designed for greyhound racing arenas but are adaptable to other sports such as horse racing [3], beach relay and sand volleyball [1].
A Drop Test to Study the Dynamic Behaviour of the Sport Sand Surfaces
As discussed above, the sand characteristics contribute to the dynamic behaviour of the sand, mainly under impact loads. The sand particle sizes and percentages recommended for a greyhound racing arena are given in Table 1.
The sample was taken from a typical greyhound racing arena and was oven-dried for 24 h according to the AS 1289 Part 2.1.1 Standard [38]. As per the Standard, the sand sample should be heated in an oven, between 105 and 110 °C, for 16 to 24 h.
The sample was then loaded onto the sieve shaker. The procedure adopted for this test followed the AS 1289 Part 3.6.1 2009 Standard [38], using a sieve shaker, model EFL 2000. As per the Standard, the sieve tray sizes were selected from 4.75 mm down to 75 µm. The procedure was carried out such that the sieves were not overloaded; where a sieve was overloaded, its sample was further sieved in two or more portions. The sieve shaker was set to shake for 5-10 min so that the sand was completely separated by size, and the same procedure was repeated for six samples of soil. The calculations for generating the grading curve plots follow the AS 1289 Part 3.6.1 2009 Standard [38]: the percent retained on each sieve is the mass retained on that sieve divided by the total sample mass, multiplied by 100 (Equations (1) and (2)). Once the percent retained is calculated, the cumulative percent retained for each sieve tray is obtained by adding the percent retained from the largest sieve size down to the current sieve size, and the percentage of sand passing the current sieve size is then 100 minus the cumulative percent retained (Equation (3)). The sand grading curve is plotted in Figure 2, with the cumulative percent passing plotted against the sieve size (on a logarithmic scale). The grading curve confirmed that the soil used on the greyhound racing track is loamy sand, a combination of sand with traces of clay [39]. The slight difference between tests can be attributed to the loss of soil during the test; caution should therefore be taken while conducting the test.
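A minimal computational sketch of Equations (1)-(3) follows (with made-up sieve masses, not the measured data); the sieve sizes are ordered from largest to smallest so the cumulative sum runs in the direction the Standard describes:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical masses retained per sieve (g); illustrative values only.
sieve_sizes_mm = np.array([4.75, 2.36, 1.18, 0.600, 0.300, 0.150, 0.075])
mass_retained_g = np.array([1.2, 3.5, 10.8, 52.4, 160.9, 48.7, 12.5])

percent_retained = 100.0 * mass_retained_g / mass_retained_g.sum()  # Eqs. (1)-(2)
cumulative_retained = np.cumsum(percent_retained)                   # largest -> smallest
percent_passing = 100.0 - cumulative_retained                       # Eq. (3)

# Grading curve: cumulative percent passing vs. sieve size on a log axis.
plt.semilogx(sieve_sizes_mm, percent_passing, marker="o")
plt.xlabel("Sieve size (mm)")
plt.ylabel("Percent passing (%)")
plt.show()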
The sand moisture content and compaction rate are two important parameters that alter the mechanical behaviour of the sand. To study the effect of these two parameters on the dynamic behaviour of sand collected from a typical sport arena (in our case, a greyhound racing arena), an impact test complying with the AS 1289 Part 2.1.1 Standard [38] can be applied.
To perform the impact test, a conventional Clegg hammer was modified by mounting two calibrated, laboratory-grade Endevco high-G accelerometers. Adding the high-G precision accelerometers allowed a higher degree of experimental accuracy than that offered by the standard Clegg hammer. The reliability of the system was established in previous studies on the impact attenuation of children's playground surfacing [36,37].
The dynamic behaviour of the sand sample was studied by analysing the impact data, namely the maximum acceleration (G_max), the maximum rate of change of acceleration, or jerk (J_max), the impact duration (contact time), and the energy loss. Before treatment, the sand sample was again air- or oven-dried following the AS 1289 Part 2.1.1 Standard [38]; based on the Standard, the sand sample was heated in an oven, between 105 and 110 °C, for 16 to 24 hours.
The effects of three moisture levels, dry (12%), medium to ideal (17%), and ideal (20%), and three rates of compaction, low traffic (1.35 g/cm³), medium traffic (1.45 g/cm³), and high traffic (1.55 g/cm³), on the dynamic behaviour of the sand sample were studied. The sand densities used to replicate the traffic conditions of the surface follow Holt et al. [3].
For all three conditions, we used a cylindrical container with an inner diameter of 15.6 cm. The sand was filled in 3.0 cm increments until reaching a depth of 12.0 cm. Tamping was applied manually; preferably, the tamper should be equipped with an accelerometer to provide a measure of the applied force, but achieving a set depth was the only control we could apply. The average sand density (the mass of the sand sample divided by its volume) for each simulated traffic condition was calculated and is given as follows:
• Low traffic condition: the top 3.0 cm layer was raked. The average sand sample density across all moisture contents was 1.35 g/cm³. This traffic condition is pictured in Figure 3A.
• Medium traffic condition: the top 3 cm layer was struck with a tamper to achieve a depth of 14 cm. The average sand sample density across all moisture contents was 1.45 g/cm³. This traffic condition is pictured in Figure 3B.
• High traffic condition: the top 3 cm layer was struck with a tamper to achieve a depth of 13 cm. The average sand sample density across all moisture contents was 1.55 g/cm³. This traffic condition is pictured in Figure 3C.
After preparing the sand sample, an impact attenuation test complying with the ASTM F3146 Standard [20] was conducted from three different heights: 400 mm, 500 mm and 600 mm. Based on the Standard, the test was repeated four times from each height, and the maximum values of G_max, J_max and contact time were reported. After the fourth drop at each height in the same location, the sand sample was reconstructed to avoid the effect of over-compaction of the lower layers on the results. The impact attenuation data were then post-processed using LabVIEW software and plotted in MATLAB R18. An ANOVA test (two-factor with replication) was conducted; values of p ≤ 0.05 were considered statistically significant. The experimental setup is illustrated in Figure 4, and a sketch of the per-drop post-processing is given below.
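A minimal sketch of this post-processing (the event threshold, sampling rate and synthetic traces below are assumptions; the study used LabVIEW and MATLAB rather than this Python analogue):

import numpy as np

def impact_metrics(accel_g, fs_hz, threshold_g=1.0):
    # accel_g: hammer deceleration trace in units of g; fs_hz: sampling rate.
    g_max = accel_g.max()
    jerk = np.gradient(accel_g, 1.0 / fs_hz)    # rate of change of acceleration (g/s)
    j_max = np.abs(jerk).max()
    above = np.where(accel_g > threshold_g)[0]  # samples within the impact event
    contact_ms = 1000.0 * (above[-1] - above[0]) / fs_hz if above.size else 0.0
    return g_max, j_max, contact_ms

# Stand-in traces for the four drops at one height (illustrative only).
rng = np.random.default_rng(1)
four_drop_traces = [np.abs(rng.normal(0.0, 5.0, 2000)) for _ in range(4)]

# Per the Standard, the maximum over the four drops is reported.
drops = [impact_metrics(trace, fs_hz=10_000) for trace in four_drop_traces]
g_max, j_max, contact_ms = (max(values) for values in zip(*drops))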
Pre-Surface Condition Data to Test Whether There Is a Correlation with Injuries
As argued above, the sand moisture content and density affect the dynamic behaviour of sports sand surfaces. It is also seen in a mathematical model of greyhounds that subtle changes in these two values significantly change the forces to which the animal's limbs (mainly the hind-leg) are exposed [25]. Accordingly, measures should be in place to correlate the sports sand surface moisture content and firmness with the probability of injuries, as is also advised for other sport arenas, such as horse racing [40].
In the Australian greyhound racing industry, the sand moisture content and firmness are measured using a portable moisture meter. In this section, surface condition data for a de-identified greyhound racing track are analysed over a duration of one year, July 2019 to July 2020, during which an increase in the rate of catastrophic incidents was observed. The hypothesis was that the moisture and firmness ranges would not fall within the recommended range.
Moreover, any inconsistency in the track surface is dangerous and can cause an injury [8,26,28], as the greyhound is not capable of adjusting its gait to changing surface conditions [27]. Apart from assessing whether the moisture and firmness data fall within the recommended range, the fluctuation between the inside and middle track readings should be calculated at different vicinities of the track. It is hypothesised that high fluctuation in these values suggests irregular surface properties, which might contribute to injuries; a sketch of this fluctuation calculation is given below.
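Expressing the fluctuation as a percentage of the mean of the paired readings is an assumption here, since the exact definition used by the industry is not stated:

import numpy as np

def fluctuation_pct(inside, middle):
    # Relative difference between paired inside and middle readings,
    # as a percentage of their mean (one value per track vicinity).
    inside = np.asarray(inside, dtype=float)
    middle = np.asarray(middle, dtype=float)
    return 100.0 * np.abs(inside - middle) / ((inside + middle) / 2.0)

# Hypothetical moisture readings (%) at four vicinities of a track.
print(fluctuation_pct([17.2, 16.8, 14.1, 17.5], [17.0, 17.1, 18.9, 17.4]))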
The injury heat-map (the approximate locations on each track, for each race distance, where clusters of injuries occurred) is generated based on the injury data provided by the industry, race videos and the Stewards' reports, and is given in a later section. This assists in finding a correlation between the surface condition data and the locations on the track with a high rate of injuries.
Use of Accelerometry to Study the Limb-Surface Interactions of Sprinters
Accelerometry, the use of accelerometers to record the locomotion dynamics of athletes, is gaining attention because the sensors are cheap and user-friendly and provide fundamental information about gait. They are usually attached to the subject's joints, and their signals are fused for post-processing. In this section, the most recent accelerometry study on racing greyhounds is reviewed [4].
To study the effect of surface compliance on the galloping dynamics of racing greyhounds, an IMU equipped with a tri-axis accelerometer (sampling rate of 185 Hz) was used on two tracks to measure the associated galloping accelerations of racing greyhounds. It was hypothesised that greyhound galloping dynamics are different on different surface types (sand surface vs grass surface).
The anterior-posterior (fore-aft) and dorsal-ventral (vertical) acceleration signals recorded via the IMU were analysed to see whether the surface type affects the locomotion dynamics of greyhounds. To do so, signals from galloping on the straight sections of the sport arena were compared with each other.
The recorded dorsal-ventral acceleration due to hind-leg strikes was more than triple that of the fore-leg strikes (15 G vs. 5 G). These results were consistent with the role of the hind-legs in powering the locomotion, as well as with their higher rates of injury compared to the fore-legs.
Regarding the mechanical properties of the sand and grass surfaces, the impact deceleration (G_max) of the sand surface, measured via a Clegg hammer, was three times higher than that of the grass track [4]. Accordingly, higher accelerations were expected when running on the sand surface than on the grass. However, the IMU data (the average of the peaks of dorsal-ventral and anterior-posterior acceleration) for the sand versus grass surfaces were not significantly different.
There may be several reasons for the observed result, that is, no significant difference in the IMU signals despite the significant difference in surface type. Firstly, the IMU in this study was mounted on the animal's neck (Figure 5a), and the signals are damped while travelling through the animal's body; ideally, the IMU should be attached to the animal's foot to sense the true impact load. Secondly, the signal processing methods applied in this work are those usually used for linear time-series signals; it is hypothesised that applying nonlinear time-series analysis would identify different features of galloping over sand and grass. These methods are currently under investigation by the author of this work.
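As a minimal sketch of the peak comparison between surfaces (the signals below are synthetic stand-ins; the peak threshold, minimum strike spacing, and Welch's t-test are assumptions rather than the study's exact pipeline):

import numpy as np
from scipy.signal import find_peaks
from scipy.stats import ttest_ind

FS_HZ = 185  # IMU accelerometer sampling rate reported in the study

def strike_peaks(dv_accel_g, min_height_g=3.0):
    # Peak dorsal-ventral acceleration per limb strike; enforce ~0.1 s
    # minimum spacing between detected strikes.
    peaks, _ = find_peaks(dv_accel_g, height=min_height_g,
                          distance=int(0.1 * FS_HZ))
    return dv_accel_g[peaks]

rng = np.random.default_rng(0)
sand_signal = np.abs(rng.normal(2.0, 3.0, FS_HZ * 10))   # stand-in 10 s trace
grass_signal = np.abs(rng.normal(2.0, 3.0, FS_HZ * 10))  # stand-in 10 s trace

t_stat, p_value = ttest_ind(strike_peaks(sand_signal),
                            strike_peaks(grass_signal), equal_var=False)
print(p_value)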
A Drop Test to Study the Dynamic Behaviour of the Sport Sand Surfaces
Figure 6A-I shows the impact acceleration versus time for the sand sample at the three moisture levels and three rates of compaction. The peak of each impact acceleration trace is the maximum acceleration (G_max). The red, blue and black lines represent the 400 mm, 500 mm and 600 mm drop heights, respectively.
The main observation from Figure 6 and the impact data given in Table 2 is that, regardless of the moisture content, increasing the compaction rate of the sand sample resulted in a significant increase in G_max [p = 0.0003, F = 12]. The same effect is also seen when the moisture content is increased (while the sand density is kept constant), mainly when the moisture level increased from 12% to 17%, which was statistically significant [p = 0.054, F = 4.21].
Increasing the drop height increased the velocity at the time of impact, and the higher the initial impact velocity, the higher the value of G_max; this reveals the rate dependency of the sand [29] (a comparable approach was used in [42] in analysing the performance of athletic shoes with hard and soft soles). The red, blue and black lines represent the 400 mm, 500 mm and 600 mm drop heights, respectively.
To see whether the moisture content affects the stiffness coefficient of the sand, sand samples with the same density but different moisture contents were compared with each other. It is observed that increasing the water content (hereafter, moisture content) within the 12-20% range increases the stiffness coefficient.
In the low traffic condition, this increase is 87% when the moisture content is changed from 12% to 17%, and only 24% when it is changed from 17% to 20%. In the medium traffic condition, the increase is 26% when the moisture content is changed from 12% to 17% and 55% when it increases from 17% to 20%. Similarly, in the high traffic condition, altering the moisture content from 12% to 17% increases the stiffness by 47%, and increasing it from 17% to 20% increases the stiffness coefficient by 16%. This behaviour suggests a nonlinear positive relationship between the moisture content and the stiffness coefficient.
To see whether the sand density affects the stiffness coefficient of the sand sample, the samples' stiffness coefficients are compared with each other while keeping the moisture content constant.
For a sand sample with 12% moisture content, the stiffness coefficient increases by up to 95% and 84% as the rate of compaction is altered from the low to the medium traffic condition and from the medium to the high traffic condition, respectively. For a sand sample with 17% moisture content, this increase is up to 41% and 92%, respectively, for the same changes in compaction. For a sand sample with 20% moisture content, this increase is up to 76% and 45%, respectively. Increasing the sand density increases the stiffness coefficient of the samples, because increasing the density increases the interlock between sand particles, and hence the stiffness [43].
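The paper does not state the formula used to extract the stiffness coefficient K from the drop data. One common simplification, assumed here, treats the impact as an undamped linear spring: for a hammer of mass m dropped from height h, the impact velocity is v = sqrt(2gh), the peak deceleration is a_max = v·sqrt(K/m), and therefore K = m·a_max²/v²:

import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def effective_stiffness(g_max, drop_height_m, hammer_mass_kg=2.25):
    # Effective linear stiffness (N/m) under an undamped linear-spring
    # impact model; 2.25 kg is the Clegg hammer mass cited in the text.
    v_impact = np.sqrt(2.0 * G * drop_height_m)  # velocity at contact, m/s
    a_max = g_max * G                            # peak deceleration, m/s^2
    return hammer_mass_kg * (a_max / v_impact) ** 2

# e.g. a G_max of 60 g recorded from a 500 mm drop (illustrative values)
print(f"{effective_stiffness(60.0, 0.5):.0f} N/m")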
The moisture contents and traffic conditions of the sand samples, together with G_max, J_max, contact time (ms), energy loss (calculated as the area under the load-deformation plots), and the calculated stiffness coefficients (K), are tabulated in Table 2 below. In the results provided in Table 2, the contact time was not affected by the moisture level of the sand samples, but it significantly decreased as the density of the sample increased. Thus, a low to medium sand density was found to provide the favourable range of contact time with regard to injury prevention.
It is observed that altering the moisture content significantly increased G_max and J_max, with no substantial change seen in the contact time, while increasing the rate of compaction significantly increased all the impact metrics. It was also argued that high G_max and J_max and a short contact time are associated with a high injury rate. Accordingly, comparing all the impact data, the sand sample with 20% moisture content in the low traffic condition showed the most favourable behaviour with regard to both injury prevention and race performance: it had the lowest energy loss of all cases, its contact time was in the favourable range mentioned above, and its G_max and J_max values were relatively low.
Use of Pre-Surface Condition Data to See If They Correlate with Injuries
As a first step in analysing whether the sand moisture content and firmness data correlate with injury, the moisture content and sand firmness ranges for those race events with catastrophic incidents were checked against the recommended range. It was hypothesised that the sand moisture and firmness ranges would not fall within the recommended range.
As a second step, the fluctuation between the inside and middle track data (both moisture content and firmness values) should be calculated and compared with the overall fluctuation between the inside and middle track readings. It was hypothesised that any noticeable fluctuation between the inside and middle track readings contributes to injuries. To test this hypothesis, the injury locations of the catastrophic incidents, determined using the injury heat-map given in Figure 8, were compared with the high-fluctuation areas, and the results are provided below.
A de-identified track heat-map is presented in Figure 8. This heat-map was generated using race videos, the Stewards' reports and the injury data recorded by on-track veterinarians. The red circles on the heat-map represent the number of injuries at each specific injury location around the track; the larger the radius, the higher the injury rate. It should be noted that only catastrophic incidents resulting in death were used to generate the injury heat-map. The injury locations, the sand surface moisture content range, the sand firmness value range, and the high-fluctuation vicinities on the track are given in Table 3 below.
Comparing the injury heat-map with Table 3, the injury locations correlate with the high-fluctuation areas. For approximately 80% of the catastrophic injuries, the fluctuation between the inside and middle track moisture was highest in the vicinity of the injury compared with other locations on the track. The sand moisture content for approximately 80% of the races with catastrophic injuries fell within the recommended range, and the sand firmness data for all races with catastrophic injuries (where available) also fell within the recommended range.
High fluctuation between the inside and middle track surface properties exposes trailing greyhounds to a running surface with changing properties, as they tend to jostle and change direction to avoid bumping and checking, and any sudden change in the surface condition will contribute to injuries. The main maintenance practice that can assist in achieving a homogeneous surface condition is harrowing. The depth and frequency of harrowing depend on the sport arena, the frequency of races and trials, the season of the year, the weather conditions and rainfall, and, more importantly, harrowing should be accompanied by appropriate irrigation management. As discussed above, the sand moisture content and density are two important factors affecting the dynamics of the race track, and a relatively wet sand (20% for the sand studied here under laboratory conditions) in the low-traffic condition (3 cm raked top layer) was ideal in terms of both performance and safety. Accordingly, it is recommended that harrowing be conducted on a regular basis and to a sufficient depth on a surface that is consistently irrigated through an appropriate irrigation system.
Conclusions
Sports sand surfacing is used in different sports and contributes both to increased athletic performance and to a decreased risk of injuries. The first step in engineering an optimal sand surface is understanding the dynamic behaviour of sand surfacing. In this work, different methods to study the dynamic behaviour of sand surfacing used in the sporting industry are provided and, where applicable, backed up with empirical data. Analysing the impact data from the laboratory-based experiments provided insights into how subtle alterations in the sand moisture content and density can significantly affect the surface's dynamic behaviour under impact. Analysing the sand moisture content and firmness data, collected via a portable moisture meter and a penetrometer device prior to greyhound races, showed that high fluctuation in these values along the width of the track (mainly between the inside and middle regions) can contribute to catastrophic incidents. The study provided in this work can contribute to the standardisation of sport sand surfacing in sports other than greyhound racing, such as volleyball. Funding: The work is funded by Greyhound Racing New South Wales with UTS institution reference PRO17-3051.
Conflicts of Interest:
The authors declare no conflict of interest.
"Engineering",
"Environmental Science"
] |
Combination of computational techniques and RNAi reveal targets in Anopheles gambiae for malaria vector control
Increasing reports of insecticide resistance continue to hamper the gains of vector control strategies in curbing malaria transmission. This makes identifying new insecticide targets or alternative vector control strategies necessary. CLassifier of Essentiality AcRoss EukaRyote (CLEARER), a leave-one-organism-out cross-validation machine learning classifier for essential genes, was used to predict essential genes in Anopheles gambiae, and selected predicted genes were experimentally validated. The CLEARER algorithm was trained on six model organisms, Caenorhabditis elegans, Drosophila melanogaster, Homo sapiens, Mus musculus, Saccharomyces cerevisiae and Schizosaccharomyces pombe, and employed to identify essential genes in An. gambiae. Of the 10,426 genes in An. gambiae, 1,946 genes (18.7%) were predicted to be Cellular Essential Genes (CEGs), 1,716 (16.5%) to be Organism Essential Genes (OEGs), and 852 genes (8.2%) to be essential as both OEGs and CEGs. RNA interference (RNAi) was used to validate the top three highly expressed non-ribosomal predictions as probable vector control targets, by determining the effect of these genes on the survival of An. gambiae G3 mosquitoes. In addition, the effect of knockdown of arginase (AGAP008783), an enzyme we computationally inferred earlier to be essential based on chokepoint analysis, on Plasmodium berghei infection in mosquitoes was evaluated. Arginase and the top three genes, AGAP007406 (Elongation factor 1-alpha, Elf1), AGAP002076 (Heat shock 70 kDa protein 1/8, HSP), and AGAP009441 (Elongation factor 2, Elf2), had knockdown efficiencies of 91%, 75%, 63%, and 61%, respectively. While knockdown of HSP or Elf2 significantly reduced the longevity of the mosquitoes (p<0.0001) compared to control groups, Elf1 or arginase knockdown had no effect on survival. However, arginase knockdown significantly reduced P. berghei oocyst counts in the midguts of mosquitoes when compared to LacZ-injected controls. The study reveals HSP and Elf2 as important contributors to mosquito survival, and arginase as important for parasite development, placing them as possible targets for vector control.
Introduction
Vector control interventions remain potent strategies for controlling the transmission of malaria, a disease that remains a global menace [1]. These interventions include larval control methods, the use of insecticide-treated nets, and indoor residual spraying [2]. However, these vector control strategies are approaching the limit of their effectiveness [3], resulting in the consistently high morbidity rates reported annually [1]. Consequently, there is an urgent need to introduce innovative vector control strategies. Other vector control interventions, such as RNA interference (RNAi) based biopesticides, generation of refractory mosquitoes, and sterile insect techniques, are currently gaining attention. However, these techniques depend on the identification of appropriate targets. An important tool in functional genomics that can be used to investigate and characterize promising targets is RNAi, a gene silencing technique that provides insight into the function of a gene and can be successfully applied to investigate gene knockdown in mosquitoes [4]. Using this technique, the synthesis of proteins that play a role in the survival, fecundity, metabolism, vectorial capacity, and behaviour of mosquitoes, and of insects generally, has been suppressed, thus unveiling some of these proteins as possible targets for vector control. For example, RNAi provided insight into the roles of TEP1, LRIM1, and APL1 as crucial genes for the immune response in Anopheles [5,6]. Similarly, several proteins involved in insecticide resistance, transport of molecules, reproductive fitness, and host-seeking behaviour have been identified through RNAi. Examples include ABCG4 [7], CYP450s [8], c-Jun N-terminal kinase (JNK) pathway components [9], Aquaporin 3 [10], and G protein-coupled receptors (GPCRs), which play a role in development, visual, gustatory and olfactory sensing, homeostasis, and hormonal regulation [11].
RNAi has been described as a promising technique for pest and mosquito control [12-16]. It is an important tool for identifying potential insecticidal targets that could be explored to develop insecticides or other products to limit the burden of mosquitoes on human health [17]. The accuracy and specificity of the technique make RNA-based biopesticides candidates as alternatives to chemical-based insecticides [18-20]. Since they are specific, unwanted effects could be reduced, with little or no effect on non-target organisms. Some targets that have been investigated as RNAi biopesticides for vector control include chitin synthase [21] and 3-hydroxykynurenine transaminase [17,22]. Likewise, interfering RNA pesticides (IRPs) corresponding to the mosquito Shaker and GPCR dopamine 1 receptor (dop1) genes have been tested and found to have adulticidal and larvicidal activities against different mosquito species [23,24]. However, employing experimental techniques to screen every gene in a disease vector to identify potential targets is a tall order.
In turn, computational methods can be employed to predict essential proteins in organisms [25]. Essential genes are considered crucial for the survival or reproductive success of an organism [26,27], and essential genes in disease vectors could serve as possible vector control targets. Employing computational techniques can lead to refined, smaller lists of genes, which can then be validated by experimental techniques to determine their suitability for vector control. Computational techniques for predicting essential genes range from simple algorithms, such as chokepoint analysis in metabolic networks, to more complex techniques such as machine learning. Chokepoint analysis has been employed to identify possible insecticidal targets in Anopheles gambiae, a major malaria vector [28]. An example of a gene predicted as essential in An. gambiae using the chokepoint criteria is arginase, which was observed to be highly expressed in the midguts of Plasmodium berghei-infected mosquitoes compared to their blood-fed counterparts [29]. Hence, arginase could contribute to the development of the parasite in the mosquito and could possibly serve as a target for vector control. However, experimental validation of these predictions remained to be done. Beder et al. [30] developed a machine learning-based technique, trained on six model organisms, to predict essential genes using a combination of leave-one-organism-out cross-validation and orthology-based approaches. In the present study, we (i) experimentally followed up on the computational prediction of arginase as essential, and (ii) applied the machine learning method to predict essential genes in An. gambiae and experimentally validated a selected shortlist of the predicted genes using the RNAi knockdown technique.
The machine learning method to identify essential genes in silico
We applied a modification of the CLassifier of Essentiality AcRoss EukaRyote (CLEARER), a machine learning classifier for essential genes that was trained on six model organisms: Caenorhabditis elegans, Drosophila melanogaster, Homo sapiens, Mus musculus, Saccharomyces cerevisiae and Schizosaccharomyces pombe [30]. The machine was trained on 60,381 genes, using 41,635 features from seven different sources, including protein and gene sequence, functional domains, topological features, evolution/conservation, subcellular localization, and gene sets from Gene Ontology.
Feature generation. Essential genes for An. gambiae were predicted using the same features used for the model organisms. The An. gambiae str. PEST genome (GenBank assembly accession: GCA_000005575.1) was used to generate the gene and protein sequence features. The tools seqinR [31], protr [32], CodonW (http://codonw.sourceforge.net/) and rDNAse [33] were used to calculate protein and gene sequence features. For genes with isoforms, the features were generated individually for each isoform and the median over all isoforms was calculated. seqinR provided simple protein sequence information, including the number of residues, the percentage of physico-chemical classes, and the theoretical isoelectric point. Most protein sequence features were obtained using protr, comprising autocorrelation, conjoint triad, quasi-sequence order and pseudo amino acid composition. CodonW was used to calculate simple gene descriptors such as length and GC content, the frequency of optimal codons, and the effective number of codons. rDNAse provided DNA descriptors such as auto covariance, pseudo nucleotide composition, and kmer frequencies (n = 2-7). Domain features were calculated using the tools from the Technical University of Denmark (http://www.cbs.dtu.dk/services/), comprising the prediction of membrane helices and beta-turns, cofactor binding, and acetylation and glycosylation sites. Topology features were derived from protein-protein associations (PPA) using the STRING v11 database [34]; these features comprised degree, degree distribution, betweenness, closeness and clustering coefficient, computed using the Python library NetworkX. Conservation features were calculated from the number of homologous proteins of a query protein in the complete RefSeq database [35] using PSI-BLAST [36]. As features, the numbers of proteins identified with e-value cutoffs from 1e-5 to 1e-100 (in 1e-5 multiplication steps) were used. An alignment coverage score (ACS) was calculated for hits with a cutoff ≤ 1e-30, as we described formerly [37]. Furthermore, the numbers of homologous sequences with a score from 0 to 0.95, in 0.05 steps, were calculated; similarly, the numbers of paralogous sequences were calculated.
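As a small illustration of one of the simpler descriptors, a normalized k-mer frequency vector can be computed as below (the actual features were generated with rDNAse in R; this Python sketch is only an analogue):

from itertools import product

def kmer_frequencies(seq, k=2):
    # Normalized k-mer frequency vector over the DNA alphabet.
    seq = seq.upper()
    counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:           # skips k-mers containing N, etc.
            counts[kmer] += 1
    total = max(1, sum(counts.values()))
    return {kmer: n / total for kmer, n in counts.items()}

print(kmer_frequencies("ATGGCTAAGGTCCTGAAC", k=2))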
Here, blastn alignment results with an e-value cutoff ≤ 1e-30 were used as input for the score. Subcellular localization features were predicted by the tool DeepLoc [38], which assigns each protein a score for its localization in 11 eukaryotic cell compartments. Gene set features were derived from all Gene Ontology (GO) terms present in all analyzed organisms, similar to Chen et al. [39]. Here, not only the characterization of the query gene was taken into account, but also that of its neighbors in the protein association network; this makes the features more robust against false gene set annotations. The neighbors of the query gene were assembled employing the gene network definitions of STRING v11. A Fisher's exact test for enrichment of interaction partners was performed for each of the gene sets, and the log10 values of the p-values were used as features.
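A sketch of one such gene set feature is given below; the 2x2 table layout is an assumption about how the enrichment of a GO gene set among a query gene's network neighbors was tabulated:

import numpy as np
from scipy.stats import fisher_exact

def gene_set_feature(neighbors, gene_set, background):
    # log10 Fisher p-value for enrichment of a GO gene set among the
    # STRING network neighbors of a query gene.
    neighbors, gene_set = set(neighbors), set(gene_set)
    background = set(background)
    table = [
        [len(neighbors & gene_set), len(gene_set - neighbors)],
        [len(neighbors - gene_set), len(background - neighbors - gene_set)],
    ]
    _, p = fisher_exact(table, alternative="greater")
    return np.log10(p)

# e.g. with hypothetical gene identifiers
print(gene_set_feature({"g1", "g2", "g3"}, {"g2", "g3", "g7"},
                       {f"g{i}" for i in range(1, 101)}))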
Defining the gold standard. Essentiality information was derived from the six species: C. elegans, D. melanogaster, H. sapiens, M. musculus, S. cerevisiae and S. pombe. For D. melanogaster, H. sapiens and M. musculus, screening data were collected from screens of cellular essential genes (CEGs) and organismal essential genes (OEGs). For C. elegans and the yeasts, only CEG screening data were available. This essentiality information was derived from the Online GEne Essentiality (OGEE) [40] and Database of Essential Genes (DEG) [41] databases and from the literature (for more details see [30]). For genes with different essentiality status in different screens, majority voting was performed. For human cell line screens, a gene had to have been studied in at least five experiments, as described formerly by Guo et al. [42].
Normalization, feature reduction and machine learning. Data analysis was performed using R. The values of each feature were z-transformed and each value was rounded to deciles. For feature selection and learning, the data were randomly split into training (80%) and testing (20%) sets. Using the training set, feature selection was performed in two steps: first, LASSO was applied using the glmnet package [43] (cv.glmnet function, alpha = 1, type.measure = 'auc'); second, collinearity was reduced by removing highly correlated features with Pearson correlation coefficients r ≥ 0.70. Next, class imbalances were addressed during training using SMOTE [44]. The classifiers were trained using Random Forest (RF) from the caret package [45]; for RF, tuneLength in the train function was set to 3, resulting in three predictors randomly sampled at each split. For each organism, a stratified randomized 5-fold cross-validation was performed in which feature selection, parameter tuning and training of the classifiers were done using 80% of the data, with 20% of the data used for testing the performance. Leave-one-organism-out cross-validation: for each individual species (five species for CEG predictions, four for OEG predictions), five machines were trained. Essential genes for the left-out species were predicted with machines trained on the corresponding CEG or OEG data sets of the other organisms. Thereby, the classifiers for each (non-left-out) species provided an essentiality prediction score between zero and one, and the average of these scores was used for the prediction of a gene to be essential in the left-out species. TPM values from an RNA-seq dataset (E-MTAB-9241) were used to aid selection of predicted genes for the experimental phase [46,47].
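The study implemented this pipeline in R (glmnet, caret, SMOTE). A simplified Python analogue of the leave-one-organism-out loop, omitting the LASSO and collinearity steps, might look like:

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

def loo_organism_scores(X, y, organism, target_species):
    # X: gene-by-feature matrix; y: essentiality labels; organism: species
    # label per gene. Train one RF per non-target species and average the
    # predicted essentiality scores for the left-out target species.
    train_species = [s for s in np.unique(organism) if s != target_species]
    test_mask = organism == target_species
    scores = np.zeros((test_mask.sum(), len(train_species)))
    for j, species in enumerate(train_species):
        mask = organism == species
        # Rebalance classes within the training species, as in the study.
        X_bal, y_bal = SMOTE(random_state=0).fit_resample(X[mask], y[mask])
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        clf.fit(X_bal, y_bal)
        scores[:, j] = clf.predict_proba(X[test_mask])[:, 1]
    return scores.mean(axis=1)   # averaged score per left-out gene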
Experimental methods
Mosquito handling. The An. gambiae G3 strain was maintained at 27-29˚C and 75-90% relative humidity with a 12:12 h light-dark photoperiod. Adults were provided with 5% sucrose solution ad libitum [48]. For reproduction, female mosquitoes were fed on BALB/c mice for 30 min. Egg dishes were placed in cages 48 h post-blood-feeding and retrieved 24 h later. Mosquitoes were reared according to the MR4 protocol [49]. Upon collection, eggs were bleached with 1% bleach and rinsed three times with 0.05 g/L salted deionized water. Bleached eggs were hatched the following day in a tray containing 500 mL of 0.05 g/L salted deionized water and larval food to a final concentration of 0.02%. Splitting of L1 larvae was carried out either in the evening of the day of hatching or the day after. Larvae were split into trays, each containing approximately 250 larvae in 500 mL of 0.05 g/L salted deionized water and larval food to a final concentration of 0.02%. A volume of food (3-10 mL) was added to each tray daily depending on the larval stage. The larval food (50 g) comprised a mixture of tuna fish meal (20 g), liver powder (20 g), and vitamin mix (10 g). An aliquot of larval food (1 g) was dissolved in 50 mL of water (yielding 2%), and appropriate volumes were taken from this. Upon pupation, pupae were collected into cups of clean water and placed in a cage to allow the emergence of adults.
Mice handling. BALB/c mice were used as P. berghei vertebrate hosts and for blood-feeding mosquitoes for reproduction. The mice were maintained in the animal facilities of the University of Camerino, Camerino, Italy. All animal rearing and handling was carried out according to the Italian Legislative Decree (116 of 10/27/92) on the "use and protection of laboratory animals" and in agreement with the European Directive 2010/63/UE. The experimentation was approved by the Ethical Committee of the University of Camerino. According to the method of Cappelli et al. [50], BALB/c mice were maintained at 24˚C, fed on standard laboratory mouse pellets (Mucedola S.r.l., Milano, Italy) and provided with tap water ad libitum. Mice were anesthetized using a mixture of 10 mg/mL prequillan (ATI-srl), 20 mg/mL sedaxylan (Dechra) and 1X phosphate-buffered saline (PBS). Each mouse was injected with 0.1 mL of this mixture intraperitoneally and used to feed mosquitoes 15 min after injection.
Primers. Primers with T7 tails for dsRNA synthesis were designed using the E-RNAi website (https://www.dkfz.de/signaling/e-rnai3/). All qPCR primers were designed using the primer wizard in Benchling (www.benchling.com), which is powered by Primer3 (https://primer3.org/). Care was taken to ensure that the designed primers were target-gene specific. Primers were synthesized by Metabion (www.metabion.com). All primers used in this study are provided in S1 Table.
RNA interference. Total RNA extraction from a pool of five whole mosquitoes was carried out using RNAzol RT (Sigma) according to the manufacturer's instructions. Complementary DNA (cDNA, 10 μL) was synthesized from 500 ng total RNA using the PrimeScript RT reagent kit (Takara Bio) according to the manufacturer's instructions, incubated at 37 ℃ for 15 min and then at 85 ℃ for 5 s. A fragment of each target gene was amplified using cDNA and target-specific primers with the T7 promoter tag sequence TAATACGACTCACTATAGGG incorporated at their 5' ends (to enable in vitro transcription by T7 polymerase). A PCR fragment of LacZ served as a control and was synthesized from E. coli expressing LacZ using LacZ-specific primers. All primers used are provided in S1 Table. An aliquot of this PCR reaction was used as a template for dsRNA synthesis in vitro using the TranscriptAid T7 High Yield Transcription Kit (Thermo Scientific) and purified following the manufacturer's instructions. DNA was removed by DNase I digestion, while proteins and free nucleotides were removed by phenol-chloroform extraction according to the manufacturer's protocol. dsRNA was eluted with DEPC-treated water and its concentration measured using a Nanodrop 1000 spectrometer (Thermo Fisher Scientific, USA). Gel electrophoresis (1% TBE agarose) was performed on an aliquot of the PCR products to confirm that products of the expected size were synthesized for each gene. Similarly, aliquots of the synthesized dsRNA were evaluated on a 2% TBE agarose gel.
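Constructing the T7-tagged primers is mechanical; a short sketch (the gene-specific primer below is hypothetical, not one of the study's primers):

T7_PROMOTER = "TAATACGACTCACTATAGGG"  # T7 tag sequence given in the text

def t7_tag(primer_5to3):
    # Prepend the T7 promoter so the PCR product can template
    # in vitro transcription by T7 polymerase.
    return T7_PROMOTER + primer_5to3.upper()

print(t7_tag("atggctaaggtcctgaac"))  # hypothetical primer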
Mosquito injection. Two- to three-day-old female mosquitoes were anaesthetized on ice and injected with 138 nL (5 μg/μL) of target-specific dsRNA or LacZ control dsRNA using a Drummond Nanoject II Automatic Nanoliter Injector (3-000-205-A, Drummond Scientific, Broomall, PA, USA) and glass microcapillary injection needles, according to the method described by Mancini et al. [48]. The microcapillary injection needles were obtained by pulling glass capillaries (BF150-86-10, Sutter Instruments, Novato, CA, USA) with a Flaming/Brown Micropipette Puller System Model P-1000 (Sutter Instruments Company, Novato, CA, USA). For survival experiments, anesthetized mosquitoes injected with 1X PBS were used as a handling control; mosquitoes not subjected to injection, but which had undergone cold anesthetization, were used as an untreated control. Mosquitoes that died within 24 hours post-injection were excluded from the analysis. Fifteen mosquitoes were injected to determine knockdown efficiency by quantitative PCR (qPCR) analysis (5 mosquitoes per replicate, 3 replicates), while 280 female mosquitoes per treatment were used for longevity analysis (70 mosquitoes per replicate, 4 replicates). For P. berghei infection, 200 female mosquitoes per treatment were injected. After injection, mosquitoes were maintained in the insectary under standard conditions.
Real-time quantitative PCR (RT-qPCR). For the arginase experiment, RNA was extracted from whole mosquito samples at 24 h intervals post-dsRNA injection (24 h, 48 h, 72 h and 96 h). For the HSP, Elf2 and Elf1 experiments, RNA was extracted from whole mosquito samples three days post-dsRNA injection. RNA was reverse-transcribed into cDNA using the iScript™ gDNA Clear cDNA Synthesis Kit (BioRad). Real-time quantitative PCR was conducted using 4 μL of HOT FIREPol EvaGreen qPCR Supermix (Solis Biodyne, Estonia), 2 μL of template (cDNA), and 2.5 μL of 1 μM primer mix (combined forward and reverse primers), made up to a final volume of 20 μL with nuclease-free water. PCR amplification was performed by preheating the reaction to 95˚C for 12 min, followed by 40 PCR cycles (95˚C for 30 s, 60˚C for 30 s, and 74˚C for 30 s) and a melt curve run from 65 to 95 ℃, with the temperature increasing by 0.5 ℃ every 5 s. The An. gambiae ribosomal S7 gene was used as an internal reference gene for the normalization of each target gene, and the results were further normalized against the LacZ-injected control group. Relative expression levels of the genes were reported as 2^-ΔΔCt [51]. The amplification efficiency of all qPCR primers used was determined.
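The 2^-ΔΔCt calculation [51] normalizes the target gene to the S7 reference and then to the LacZ control group; a minimal sketch with hypothetical mean Ct values:

def relative_expression(ct_target, ct_s7, ct_target_ctrl, ct_s7_ctrl):
    # Livak 2^-ddCt: normalize to the S7 reference gene, then to the
    # LacZ dsRNA-injected control group.
    dd_ct = (ct_target - ct_s7) - (ct_target_ctrl - ct_s7_ctrl)
    return 2.0 ** (-dd_ct)

rel = relative_expression(26.4, 18.1, 24.0, 18.2)  # hypothetical Ct values
print(f"relative expression: {rel:.2f}; %KD = {100 * (1 - rel):.0f}%")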
Survival assay. Longevity was assessed to determine whether knockdown of HSP, Elf2 or Elf1 affected An. gambiae survival. Longevity was evaluated using a total of 280 female mosquitoes per treatment (70 mosquitoes per replicate, 4 replicates). The rate of survival was monitored until 100% mortality was reached for all six treatments (HSP dsRNA, Elf2 dsRNA, Elf1 dsRNA, LacZ dsRNA, PBS, and not injected).
For arginase knockdown, longevity was assessed using a total of 140 female mosquitoes per treatment (70 mosquitoes per replicate, 2 replicates). The setup included three treatment groups: Arg dsRNA, LacZ dsRNA, and not injected. All mosquitoes, including those in the not-injected treatment group, were exposed to a naïve blood meal from mice 48 h post-dsRNA injection. A blood meal was introduced because an effect of knockdown of these genes on P. berghei development was hypothesized. The rate of survival was monitored until 19 days post-blood meal. Survival analyses were performed using the Kaplan-Meier method [52,53], and significance between groups was determined by log-rank (Mantel-Cox) tests. Graphs were plotted using the ggsurvplot function in the R programming software [54].
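The study ran the Kaplan-Meier and log-rank analyses in R (ggsurvplot). An equivalent sketch using the Python lifelines package, with made-up survival times rather than the study's data:

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical survival data: day of death and event flag per mosquito.
df = pd.DataFrame({
    "day":   [5, 8, 9, 12, 15, 6, 10, 14, 18, 19],
    "event": [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],   # 0 = censored
    "group": ["HSP dsRNA"] * 5 + ["LacZ dsRNA"] * 5,
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["day"], sub["event"], label=name)
    kmf.plot_survival_function()

a = df[df["group"] == "HSP dsRNA"]
b = df[df["group"] == "LacZ dsRNA"]
result = logrank_test(a["day"], b["day"],
                      event_observed_A=a["event"], event_observed_B=b["event"])
print(result.p_value)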
P. berghei infection. The murine malaria parasite P. berghei was used in this study as a model organism for the investigation of human malaria. Three mice were infected with P. berghei (GFP CON, PbGFPCON) from frozen capillary stocks diluted in 200 μL of PBS (pH 7.2) through intraperitoneal injection. The level of parasitemia in the mice was determined four days after infection using methanol-fixed, air-dried blood smears taken from the tails of the mice and stained with 15% (w/v) Giemsa solution; parasitemia was counted under an optical microscope (Olympus CX21). The mouse with the best parasitemia was selected as the donor mouse for passaging to recipient mice. Infection of recipient mice was carried out according to the method of Cappelli et al. [50], with slight modification. Eight-week-old female mice (18-25 g) were infected with P. berghei directly by an intraperitoneal injection of 5 x 10^6 infected erythrocytes (from a donor mouse) diluted in 200 μL of PBS (pH 7.2). Parasitemia and gametocytemia were determined in 15% Giemsa-stained blood smears obtained from recipient infected mice three days after infection using an optical microscope. Recipient mice were used for feeding mosquitoes when parasitemia was between 8 and 11% and gametocytemia was between 1.2 and 3.8%. Prior to feeding, mice were anesthetized using a mixture of 10 mg/mL prequillan, 20 mg/mL sedaxylan and 1X PBS, and were used to feed mosquitoes 15 min after being anesthetized.
Non-fed and partially fed females were removed from the cage. To allow parasite development, mosquitoes were kept at 19 ℃ and 70% humidity. Midguts from a total of 50 mosquitoes per treatment group were dissected 10 days post-blood meal (PBM) to confirm the presence of oocysts using fluorescence microscopy. The number of oocysts in each midgut was counted and compared to the LacZ control.
Statistical analysis
Statistical analyses of gene expression data and oocyst count data were performed at a 95% confidence interval in GraphPad Prism 5. Relative gene expression data are presented as mean ± SEM, and statistical significance was determined by one-way analysis of variance (when comparing more than two groups) with a Bonferroni post hoc test; where only two groups were compared, a paired t-test was used. Data from the survival analysis were analyzed using the Kaplan-Meier method [52], presented as mean ± confidence interval, and a log-rank test was used to determine whether survival curves differed statistically between the target-specific dsRNA-treated groups and the control groups. Survival curves were plotted using R software. Oocyst count data from the infection experiments are presented as medians on a dot plot, with each dot representing the number of oocysts per midgut. Statistical significance between oocyst counts from the LacZ and treatment groups was determined by a Mann-Whitney test.
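The oocyst comparison is a standard two-sample Mann-Whitney test; a sketch with hypothetical counts (not the study's data):

from scipy.stats import mannwhitneyu

lacz_counts = [34, 51, 12, 48, 27, 60, 39]  # hypothetical oocysts/midgut
arg_counts = [10, 22, 5, 18, 9, 30, 14]     # hypothetical oocysts/midgut

stat, p = mannwhitneyu(arg_counts, lacz_counts, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")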
Machine learning results
Predictions were carried out on a total of 10,426 genes in An. gambiae. Using the CLEARER approach, 1,946 genes (18.7%) were predicted to be CEGs, 1,716 (16.5%) to be OEGs, and 852 genes (8.2%) to be essential as both OEGs and CEGs. Using the orthology-based approach, only 249 (2.4%) genes were predicted to be essential. Combining both the CLEARER and orthology-based approaches, only 94 genes (0.9%) were predicted to be essential (Fig 1). The results of the predictions for all 10,426 genes in An. gambiae are provided in S1 File.
Some of the genes predicted as essential by all three approaches had very low prediction scores in the CLEARER approach (<0.2). To proceed further, we focused on the essentiality scores from CLEARER, selecting the top 250 ranked genes, which belonged to the top 2.5% of the scores. We proposed that an essential gene, in addition to being essential in our predictions, should be highly expressed. Only thirteen of the genes belonging to the top 2.5% had high gene expression values (i.e., mean TPM > 1000), suggesting they are highly expressed in all conditions. The median, 25th and 75th percentile TPM values of all genes are provided in S1 File. We experimentally validated the top three highly expressed predicted genes that were non-ribosomal. These genes are presented in Table 1.
While AGAP009441 was predicted to be essential by all three methods (belonging to one of the 98 genes in Fig 1), AGAP007406 and AGAP002076 were predicted to be essential as both CEGs and OEGs but not by the orthology-based approach (thus belonging to the 754 genes in Fig 1). However, both genes had 14/66 (21%) and 6/17 (35%) essential orthologs, respectively, and their D. melanogaster orthologs are conditionally essential based on OGEE essentiality.
Effect of knockdown of arginase on survival of mosquitoes and P. berghei development
The primer efficiencies of all qPCR primers used in this study are provided in S2 Table. Arginase knockdown was monitored at 24 h intervals after dsRNA injection, since arginase was considered a possible target for the abrogation of P. berghei development in the mosquitoes. It was observed that arginase levels were significantly reduced (p<0.01) in naïve blood-fed mosquitoes 24 h after a blood meal compared to their sugar-fed counterparts (%KD = 62%) (Fig 2D). Likewise, arginase levels were greatly reduced (p<0.001) in blood-fed, arginase dsRNA-treated mosquitoes compared to their blood-fed, LacZ dsRNA-treated counterparts 24 h after the blood meal (Fig 2E). This showed that blood feeding did not mask or revert gene silencing. Since arginase levels are reduced after a naïve blood meal, arginase might not be essential for the survival of mosquitoes. Silencing of arginase significantly reduced (p = 0.020) the number of Plasmodium oocysts in the midguts of mosquitoes compared to the LacZ dsRNA-injected control group (Fig 2F).
Effect of knockdown of HSP, Elf2 and Elf1 on survival of An. gambiae
An. gambiae G3 mosquitoes injected with HSP, Elf2 or Elf1 dsRNA showed a significant reduction (p<0.001) in the expression of the respective gene 72 h post-injection compared to control groups injected with LacZ dsRNA (Fig 3A).
Discussion
Arginase was considered a possible target for disrupting P. berghei development in mosquitoes due to its observed increased expression in the midgut upon P. berghei infection [29]. A longevity assay was carried out for arginase to evaluate the suitability of the gene knockdown for the Plasmodium infection experiment: the mosquitoes needed to survive at least 10 days to allow time for Plasmodium infection, development, and assessment of Plasmodium oocyst counts. Despite the strong knockdown achieved by arginase gene silencing, no effect on the longevity of mosquitoes was noted in two longevity assay replicates (Fig 2A-2C), and the two replicates did not show any variance. In both replicates, the survival of mosquitoes following knockdown of arginase was comparable to the control group. The study suggests that arginase might not be important for the survival of mosquitoes, which may be explained by the observation that its expression is significantly reduced during blood feeding (Fig 2D and 2E). Hence, the Plasmodium infection study was carried out. Similarly, no effect on survival due to arginase knockdown was observed during the infection studies. In turn, knockdown of arginase resulted in a significant reduction in P. berghei oocyst counts per midgut at day 10 post-infection (Fig 2F), suggesting that knockdown of arginase hampers the development of P. berghei. Arginase competes with nitric oxide synthase for the same substrate, arginine. Parasites, e.g., trypanosomes, have been reported to evade nitric oxide production in the host by activating the production of host arginase [55]. This has been observed to result in a depletion of L-arginine, reducing levels of cytotoxic nitric oxide and enhancing the production of polyamines required for parasite growth [56]. Although this complete mechanism has not been elucidated in An. gambiae, it is proposed that knockdown of arginase might result in an increased abundance of nitric oxide, thereby enhancing parasite clearance [57-59]. However, this must be further investigated. In addition, arginase metabolizes arginine to produce ornithine, a precursor for polyamine synthesis. It has been shown that polyamines modulate Plasmodium infection [58,59]; hence, knockdown of arginase may prevent their synthesis, thereby reducing parasite load [57]. Since the knockdown of arginase did not affect survival, it might be useful to investigate the effect of a complete knockout of arginase on the development of Plasmodium. Arginase shares 42.0% (E-value: 4e-87) protein sequence identity with its mitochondrial ortholog in human (P78540) and 45.4% (E-value: 3e-87) with the cytosolic human ortholog (P05089), although no significant similarity was found between their nucleotide sequences. Hence, from this study, arginase might represent a good target to explore for transmission blocking in An. gambiae, with consideration given to developing highly selective inhibitors. The incomplete parasite clearance observed as a result of arginase knockdown could be because knockdown only transiently reduces the expression of genes and does not completely prevent protein synthesis; hence, some arginase would still be available. Likewise, the immune response of mosquitoes to parasite invasion is complex, involving the interaction of many proteins and cell types.
The timing and intensity of these interactions can result in different outcomes [60]. Simultaneous knockdown of multiple proteins that influence these responses might therefore be necessary to achieve complete parasite clearance. Knockdown of Elongation factor 1-alpha, Elf1 (AGAP007406), did not affect the survival of mosquitoes despite the strong knockdown observed upon Elf1 dsRNA treatment (Fig 3A-3C). This result might be explained by the presence of an isoform of Elongation factor 1-alpha (AGAP003541) in An. gambiae. AGAP003541 had very low TPM values (<1) in the RNA-seq data used in this study, compared to AGAP007406 with TPM values of about 8000 (see S1 File); hence AGAP007406 was considered the major isoform and was targeted by RNAi. Since the dsRNA was designed to specifically target AGAP007406, increased expression of its isoform AGAP003541 might be triggered to perform the necessary function of the gene product, thereby counteracting the effect of silencing AGAP007406. Elf1 is a housekeeping gene whose GTP-binding protein product is necessary for peptide elongation during protein translation [61]; hence it is an important hub in protein networks [62]. Inhibition of eukaryotic translation elongation factor 1 alpha 1 by Nannocystin Ax has been reported to inhibit translation of new proteins and downregulate cyclin D1, inducing G1 cell cycle arrest in colon cancer cells [61]. This underlines the importance of this gene for cellular survival. To further investigate the essentiality of Elongation factor 1-alpha in mosquitoes, it would be necessary to design dsRNA targeting regions conserved between the two genes, or to use chemical inhibitors. In contrast, knockdown of Elongation factor 2 (Elf2) significantly reduced the survival of the mosquitoes (Figs 4A-4C). Elf2 is a GTP-binding protein essential for protein synthesis: it catalyses the translocation of the two tRNAs and the mRNA on the ribosome following peptidyl transfer [63]. Unlike Elf1, Elf2 is encoded by a single gene [64]. Phosphorylation of Elf2 leads to its inactivation, consequently downregulating translation and reducing peptide chain elongation [65]. Considering this crucial role of Elf2 and the uniqueness of its gene, its knockdown would reduce protein synthesis in the mosquitoes, ultimately resulting in the death of the mosquito. Knockdown of Elf2 in mice downregulated the expression and synthesis of proteins involved in histone and chromatin binding and DNA helicase activity, while the synthesis of ribosomal proteins was upregulated [64]. This suggests that Elf2 is indispensable for cell division. It has also been reported that Fragment A of diphtheria toxin, produced by Corynebacterium diphtheriae, inhibits protein synthesis through ADP-ribosylation of Elf2 (ADP-ribosylation leads to inactivation). Diphtheria toxin causes diphtheria, which results in death in 5 to 10% of C.
diphtheriae-infected patients, with mortality rates of up to 20% in children under 5 years or adults over 40 years of age [66]. Hence, inhibition of Elf2 may lead to death. While it is essential to investigate the effect of the knockdown on the transcriptome of the mosquitoes to identify the mechanism by which the observed death occurred, reduced levels of cell cycle proteins and other crucial proteins might contribute to it. Elf2 has 78.7% (E-value: 0.0) protein sequence identity with its human ortholog (P13639) and 79.5% (E-value: 2.5e-94) nucleotide sequence identity with the human eukaryotic translation elongation factor 2 gene (EEF2) (NM_001961.4). Hence, caution must be taken in developing inhibitors against this target for use as insecticide molecules. Identifying unique and specific features of the mosquito protein compared to the human one can aid the development of highly specific inhibitors for this target [67]. For example, studies have shown that selective acetylcholinesterase inhibitors can be designed for An. gambiae by targeting an unpaired cysteine residue present in the mosquito but absent in humans [68,69]. Sequence alignment of Elf2 amino acid residues from An. gambiae (AGAP009441), An. stephensi (ASTEI20_042603), An. funestus (AFUN2_002633), Aedes aegypti (AAEL004500), Ae. albopictus (AALFPA_058151), Culex quinquefasciatus (CQUJHB017554) and humans (NP_001952.1) provides evidence that selective insecticide development might be possible (S1 Fig). The alignment reveals specific amino acid residues conserved across the mosquito species but not in human, as well as some residues conserved in the anopheline mosquitoes only. In particular, there is a unique sequence region (TNPDQRD) present in all mosquito sequences aligned that is absent in the human sequence. Such unique residues, among others (see S1 Fig), could be exploited for the development of selective and specific insecticides that are not toxic to humans [67], once their functional roles have been better studied. Hence, Elf2 might represent a good target to explore for insecticide development. Similarly, this gene could be a good candidate for RNAi-based biopesticide vector control strategies.
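The comparative alignment scan used here for Elf2 (and below for HSP70) is easy to prototype. The following Python sketch is not the authors' pipeline; the aligned sequences are placeholders, and only the column-scanning logic is illustrated: it reports positions where all mosquito sequences share one gap-free residue that differs from the human ortholog, i.e., candidate sites for selective inhibitor design.

```python
# Sketch: find alignment columns conserved across mosquito species but
# different in the human ortholog. Sequences must be pre-aligned (equal
# length), e.g. exported from MAFFT/Clustal; the entries are placeholders.
alignment = {
    "An_gambiae":   "MTNPDQRD--K",
    "An_stephensi": "MTNPDQRD--K",
    "Ae_aegypti":   "MTNPDQRDA-K",
    "human":        "MA-PEQKDSGK",
}

def mosquito_specific_sites(alignment, outgroup="human"):
    """Return (column, mosquito residue, human residue) for columns that
    are identical and gap-free in all non-human sequences yet differ
    from the human residue at that position."""
    mosquito_seqs = [s for name, s in alignment.items() if name != outgroup]
    human = alignment[outgroup]
    hits = []
    for i, h in enumerate(human):
        column = {seq[i] for seq in mosquito_seqs}
        if len(column) == 1:
            (res,) = column
            if res != "-" and res != h:
                hits.append((i, res, h))
    return hits

print(mosquito_specific_sites(alignment))
```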
Knockdown of Heat shock 70 kDa protein 1/8 (HSP) significantly reduced the survival of An. gambiae (Figs 3C, 4D-4F). HSP is a molecular chaperone for protein folding, and its induction has been reported to suppress o'nyong-nyong virus (ONNV) infection; consequently, its knockdown enhances ONNV replication in Anopheles coluzzii [70]. In that study, when ONNV was coinjected with dsRNA targeting HSP, survival was greatly reduced compared to control groups in which ONNV was coinjected with β-galactosidase (LacZ) dsRNA. When dsRNA targeting HSP alone was injected into mosquitoes, survival was also reduced, although not as strongly as in combination with ONNV. This indicates that the reduced HSP levels and their downstream effect of increasing ONNV levels together produced the increased mortality rate [70]. The finding in the present study thus provides further evidence that HSP is essential for the survival of mosquitoes.
Further studies evaluating the effect of HSP on Plasmodium development would give insight into how HSP could be manipulated to hamper malaria transmission. HSP has also been shown to be upregulated in DDT-resistant An. funestus in Benin [71]. Similarly, AALB008255 in An. albimanus, which is identical to HSP at its carboxyl end, was found to be downregulated during P. berghei infection of the mosquito [72]. The effect of knockdown of this gene on insecticide resistance as well as on Plasmodium infection should therefore be investigated further. HSP has 80% identity with heat shock protein family A (HSP70) members 1A and 1B in human (NM_005345.6 and NM_005346.6) and 72.6% with heat shock cognate 71 kDa protein isoform 1 in humans (NM_006597.6); hence, caution should be taken if it is considered a target for insecticide development. Sequence alignment of HSP70 amino acid residues from An. gambiae (AGAP002076), An. stephensi (ASTEI20_036817), An. funestus (AFUN2_003795), Ae. aegypti (AAEL019403), Ae. albopictus (AALFPA_044680), Culex quinquefasciatus (CQUJHB018229) and humans (NP_006588.1 or NP_005336.3) provides evidence that selective insecticide development might be possible (S2 Fig). The alignment reveals specific amino acid residues conserved across the mosquito species but not in human, as well as some residues conserved in these anopheline mosquitoes only. In particular, there is a unique sequence region (APGAG) present in all mosquito sequences aligned that is absent in the human sequences. Such unique residues, among others (see S2 Fig), could be exploited for the development of selective and specific insecticides that are not toxic to humans [67].
The machine learning approach in this study is based on predicting essential genes across eukaryotes at the organismal and cellular levels using CLEARER and an orthology-based approach. The approach has previously proven useful for predicting essential genes in Tribolium castaneum, some of which were experimentally validated to be essential [30]. The essential genes/proteins identified in this study could serve as probable targets for selective insecticide development, exploiting unique insect-specific amino acid residues present in the targets compared to their human orthologs. They could also be targeted by RNAi biopesticide vector control strategies. In addition, these essential genes could be targeted in gene drive vector control strategies such as Cleave and Rescue [73,74] or Home and Rescue [75].
Other work is ongoing to develop models for conditional essentiality. For example, a machine learning model to predict essential developmental-stage and immune-response genes in Drosophila melanogaster has been developed [76]. This model could be extended to An. gambiae to predict conditionally essential genes that could then be tested experimentally. Still, a limitation of such prediction studies is, of course, that their lists of predictions typically contain false positives while missing some truly essential genes (false negatives). This makes it mandatory to follow up with experimental validations to reduce the false positives. Besides this, the analyses performed in this study were carried out using An. gambiae infected with P. berghei. As a future step, it would be intriguing to perform perturbation studies in An. gambiae infected with P. falciparum.
Conclusion
The machine learning approach was based on data from six model organisms and was applied to the genome data of An. gambiae in a top-down manner. It led to new findings independent of prior expert knowledge, making it a valuable alternative to conventional ways of screening for novel targets in vector control. Of the four genes tested in this study, three were found to be possible targets for vector control, and these three genes were non-redundant. This highlights the importance of combining computational with experimental techniques when searching for targets, as non-redundant predicted targets might play a crucial role in the survival or immunity of the organism. This study provides evidence that HSP and Elf2 are important for the survival of An. gambiae; as such, they could serve as possible targets for insecticide development or RNAi-based biopesticides. Similarly, knockdown of arginase was observed to reduce P. berghei oocyst counts in An. gambiae, suggesting arginase as a possible transmission-blocking target in mosquitoes. As such, these genes could be exploited as targets for disease control.
"Biology",
"Medicine",
"Environmental Science",
"Computer Science"
] |
Plasma distribution around Comet 67P in the last month of the Rosetta mission
Abstract After accompanying comet 67P/Churyumov–Gerasimenko on its journey around the Sun and observing the evolution of its induced magnetosphere throughout the comet’s life-cycle, the Rosetta operations concluded at the end of September 2016 with a controlled impact on the cometary nucleus. At that time, the comet was located more than 3.7 AU from the Sun, but the data still show clear indications of a weak but well developed plasma environment around the nucleus. Rosetta observed this fading cometary magnetosphere along multiple recurring elliptical orbits, which allow us to investigate its properties and spatial structure. We examined the measured electron and neutral densities along these consecutive orbits, from which we were able to determine the structure of the spatial plasma distribution using a simple latitude and longitude dependent model.
Introduction
At 3.6 AU from the Sun, on 6 August 2014, the Rosetta spacecraft rendezvoused with comet 67P/Churyumov-Gerasimenko (67P) (Churyumov and Gerasimenko, 1972) and began to monitor its nascent atmosphere as the comet travelled towards its perihelion. The Jupiter-family comet 67P currently has a 6.44-year orbit around the Sun, with an aphelion distance of 5.68 AU and a perihelion distance of 1.24 AU. After accompanying 67P on its journey and observing the evolution of its plasma environment throughout the comet's life-cycle for more than two years, the operations of the Rosetta orbiter concluded on 30 September 2016, at 3.8 AU from the Sun, with a controlled impact on the cometary nucleus. Throughout these two years, the ESA Rosetta mission collected a variety of measurements that provide immense insight into cometary physics.
Nearing perihelion, the activity of comets rises and the neutral coma expands (Biver et al., 2019). The large number of neutral particles is continuously ionized by photoionization, electron impact ionization and charge exchange with solar wind ions (Mendis et al., 1985; Cravens, 1991; Vigren et al., 2015; Galand et al., 2016; Madanian et al., 2016; Wedlund et al., 2017; Heritier et al., 2018). During the evolution of the cometary coma of 67P, photoionization and electron impact ionization were both shown to be necessary
to explain the observed electron densities over the southern, winter hemisphere, while over the illuminated, northern hemisphere photoionization alone was reported to dominate the ionization processes (Vigren et al., 2016). After perihelion, at large heliocentric distances (beyond 2 AU), electron impact ionization dominated over photoionization and was predominant during the last 4 months of the mission on both the southern and the northern hemispheres (Heritier et al., 2018).
An early sign of the cometary plasma environment around comet 67P was the detection of water ions in the coma on 7 August 2014. At that time, the comet was located 3.6 AU from the Sun and the comet-spacecraft distance was approximately 100 km. The newly created heavy cometary ions are accelerated by the solar wind convective electric field and are picked up by the solar wind flow. As a result of the mass loading of the solar wind with cometary ions, the solar wind suffers an energy loss and is slowed down, piled up and deflected upstream of the comet (Coates, 1997; Szegö et al., 2000), although this close to the nucleus the spacecraft detected only the beginning of the mass loading process, apparent in the deflection of the solar wind ions.
During early activity, the high density plasma in the inner coma was investigated by Yang et al. (2016), who found that comet 67P's early plasma environment at a heliocentric distance of 3.4 AU consisted of two regions: an outer part mostly dominated by the solar wind convection electric field and an inner region of enhanced plasma density.
The evolution of the cometary ion environment was described during early activity in 2014, as the heliocentric distance decreased from 3.6 to 2.0 AU, as well as throughout the entirety of the mission. As the activity of the comet increased, accelerated cometary ions became more common and reached higher energies. In April 2015, the solar wind disappeared from the vicinity of Rosetta: a solar wind cavity formed around the cometary nucleus. Inside the boundary called the cometopause, the ion composition changes from a mixture of cometary and solar wind ions to picked-up cometary ions.
In the coma of comet 67P, at relatively large heliocentric distances (beyond 2.5 AU), the ion densities fall off approximately as $r^{-1}$ with radial distance from the comet, based on both photochemical equilibrium and transport-dominated models (Vigren et al., 2016). In a model presented by Nemeth (2020), the effects of magnetic field gradients were taken into account in addition to transport, production and loss. It was shown that even in the presence of strong magnetic field gradients the plasma density features an $r^{-1}$ radial dependence, except in the immediate vicinity of the diamagnetic cavity boundary. Edberg et al. (2015) reported an $r^{-1}$ dependence of the electron densities based on Rosetta measurements performed in early 2015 within 260 km from the nucleus. These results also agree with the observations made at comet 1P/Halley during the Giotto mission (Cravens, 1987).
This observed vertical cometary density profile has been confirmed down to about 3 km from the nucleus surface with the observations made on the last day of operations (30 September 2016), during the controlled descent of the Rosetta orbiter (Heritier et al., 2018), using the combined measurements of the Mutual Impedance Probe (RPC MIP) and the Langmuir Probe (RPC LAP) instruments of the Rosetta Plasma Consortium (Carr et al., 2007). The findings were in close agreement with cometary vertical ionosphere models predicting a maximum of the ionospheric densities close to the surface (Vigren and Galand, 2013) and a sharp decrease below this ionospheric peak.
Rosetta offers the unique opportunity to observe the fading cometary plasma environment in September 2016 through several similar, consecutive orbits. Our aim in this paper is to map the plasma environment around the nucleus of comet 67P through the electron densities measured by the RPC MIP experiment during the last month of the Rosetta mission. Our findings are explained and summarized by a distance, latitude and longitude dependent model of the plasma density of comet 67P.
Data
We investigated the spatial distribution of the cometary plasma around comet 67P in September 2016, more than one year after perihelion. At that time the comet was located at 3.7-3.8 AU, with sub-solar latitudes around 18-20° on the northern hemisphere (Preusker et al., 2017). The Rosetta spacecraft had a highly elliptical orbit at 4-17 km from the nucleus with periods of approximately 3 days (Fig. 1), while the nucleus had a rotation period of 12.4 h. The top panel of the figure shows the trajectory in comet-centred solar equatorial (CSEQ) coordinates (the +X axis points from the centre of mass of the nucleus towards the Sun, the +Z axis is the component of the Sun's north pole of date orthogonal to the +X axis, and the +Y axis completes the right-handed reference frame). The bottom panel uses the body-fixed 67P/C-G_CK coordinate frame (the origin of the frame is located at the centre of the comet, the +X axis points towards the prime meridian, the +Z axis towards the north pole, while the +Y axis completes the right-handed frame). During this month, Rosetta performed eight very similar, consecutive orbits around comet 67P, suitable for a comprehensive 3D mapping of the cometary ionosphere.
We show the total electron density measured by RPC MIP and the total neutral density measured by the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) in Fig. 2. The ROSINA instrument contains two sensors to determine the composition of the comet's atmosphere and ionosphere, the velocities of electrified gas particles, and the reactions in which they take part (Balsiger et al., 2007). The main objective of the MIP experiment is to provide in situ electron density and temperature in the inner coma of 67P through the measurement of the mutual impedance between two electric dipoles embedded within the plasma to be investigated. The MIP sensor is made of two receiving and two transmitting electrodes mounted on a 1 m long bar, itself mounted on a boom on the Rosetta orbiter. The instrument is capable of measuring plasma properties in two different operational modes. The first, the so-called "Short Debye Length" mode (SDL), uses different combinations of a single or of the two MIP transmitters to access dense enough plasmas. The second, the so-called "Long Debye Length" mode (LDL), uses the spherical probe of the LAP experiment, mounted on another boom and located 4 m from the MIP antenna, as a monopolar transmitter. The LDL mode has been used to access lower electron densities than those accessible with the SDL mode, down to a few tens of cm$^{-3}$. During September 2016, MIP operated essentially in SDL mode, measuring densities up to thousands of cm$^{-3}$. The uncertainty of the measured electron density is estimated to be around 10%. The uncertainty is obtained from (i) the frequency discretization of the mutual impedance spectra and (ii) the stiffness of the spectral resonant signal in the mutual impedance spectra used to retrieve the electron density. Detailed information on the computation of the RPC-MIP plasma density uncertainty, as well as explanations of possible data gaps in the electron density due to the MIP operation mode in use, is given in the RPC-MIP user guide and references therein, available in the Planetary Science Archive.
Since the length scales over which we study the density distribution are much larger than the Debye length, we assume quasi-neutrality and take the MIP electron density results as a measure of the overall plasma density. Furthermore, since the solar wind density at 3.8 AU is much smaller than the plasma density measured by MIP around 67P, we assume these measurements correspond to the overall cometary plasma density.
The plasma and neutral density curves in Fig. 2 feature a clear periodicity corresponding to the orbital period of the spacecraft, but the signal is complex and not at all symmetric around the position of closest approach to the nucleus. In addition to the main recurring peak, the data also show recurring fine structure. In the top and middle panels of Fig. 2 we also show the spacecraft's radial distance from the nucleus and its latitude and longitude in the body-fixed 67P/C-G_CK coordinate frame.
In the first days of September 2016 a corotating interaction region (CIR) impacted the comet and disrupted the measured electron densities. In order to concentrate on the unperturbed cometary plasma, the present investigation focuses on the measurements from 4 September 2016 to 24 September 2016 (Fig. 2), before the spacecraft manoeuvred itself onto a collision course with the cometary nucleus.
By investigating the position of the measurements with respect to the surface of the nucleus, we can conclude that both the measured electron and neutral densities show a maximum over the southern hemisphere, after which the density falls off rapidly shortly before the spacecraft enters the northern hemisphere. In the top panel of Fig. 3, we show a projection of the trajectories onto the terminator plane in comet-centred solar equatorial (CSEQ) coordinates. We observe that the higher density measurements occur when the spacecraft is close to the nucleus, but the high density region is offset towards the negative z region. The bottom panel of Fig. 3 shows the data in comet-fixed coordinates; the southern hemisphere clearly dominates. Although at this time the sub-solar point was located on the northern hemisphere, the active regions (for cometary neutral production) were reported to be on the southern hemisphere during this period. Hansen et al. (2016) presented the water distribution around the nucleus at 1.5 AU after perihelion with a maximum above the southern hemisphere, around latitude −30°. Kramer et al. (2017) showed how the highest neutral density regions 100 km above the nucleus shifted from the northern to the southern hemisphere between April 2015 and May 2016; in May 2016, the highest density regions were above latitudes around −60° and longitudes of −10°. Combi et al. (2020) investigated H2O, CO2, CO and O2 production rates throughout the Rosetta mission. They confirmed that H2O production rates were dominant during the mission, except from mid-2016 onwards, when CO2 gradually became dominant over all other species, its activity peaking on the southern hemisphere. As the main source of cometary plasma is the neutral outgassing of the nucleus, a strong correlation between the neutral and electron densities is expected.
Model and discussion
Figs. 2 and 3 show that although the radial distance plays an important role in the plasma density variation, it cannot be the sole player responsible for the observed structures. It is a reasonable hypothesis that the plasma density also depends on the latitude and longitude coordinates in the comet-fixed frame. This hypothesis is supported by earlier results; e.g., Hansen et al. (2016) have shown that the neutral density features such an angle dependence. The strongly non-spherical shape of the comet nucleus (Preusker et al., 2015; Jorda et al., 2016) and the solar wind-comet interactions (Deca et al., 2017, 2019; Koenders et al., 2016; Huang et al., 2016, 2018) can also influence the density distribution. In this section, we aim to provide a distance, latitude and longitude dependent model of the plasma density of comet 67P which is able to reproduce the observed cometary data. Since for these highly eccentric trajectories the vicinity of closest approach is associated with a fast latitude scan, it is possible that the rapid change in latitude is responsible for the drastic variation (strongest peaks followed by very low densities in Fig. 2) found close to the nucleus. Fig. 3 qualitatively supports this hypothesis. In addition to the highly apparent slow periodicity, the data in Fig. 2 also show fine structures (secondary and sometimes higher order peaks before the main peaks of each orbit, see e.g. Sept. 8, 11, 14 and 17 in Fig. 2). These seem to follow the rotation period of the nucleus, which suggests that the plasma distribution may be best modelled in a comet-fixed coordinate system. Thus, we modelled the 3D spatial distribution of cometary electrons and plasma around comet 67P in September 2016 in comet-fixed spherical polar coordinates. We fitted by least squares the parameters of the following test function $n(r, \theta, \phi)$ to the in situ measured electron densities:

$$n(r, \theta, \phi) = \frac{k}{r}\,\bigl(1 + a\cos(\theta - \theta_0)\bigr)\,\bigl(1 + b\cos(\phi - \phi_0)\bigr), \qquad (1)$$

where $r$ is the distance from the comet and $k$ is a constant corresponding to the angle-averaged mean electron density on a hypothetical spherical source surface one kilometre above the centre of the comet. The angles $\theta$ and $\phi$ are the latitude and longitude of the spacecraft in the comet-fixed 67P/C-G_CK frame. This is the simplest possible expression that describes a smooth partial angle dependence for both angle coordinates together with an $r^{-1}$ radial decay, and it turns out to describe the 3D cometary plasma distribution surprisingly well. The expression in the first parenthesis determines the latitudinal behaviour of the electron density: $a$ measures the relative weight of the latitude dependent part and $\theta_0$ is the latitude where the electron density has its maximum. The expression in the second parenthesis determines the longitudinal behaviour of the density, where $b$ gives the relative weight of the longitudinal variations and $\phi_0$ is the longitude where the electron density has its maximum.
If we carefully examine the density curve shown in the bottom panel of Fig. 2, we find that there is a decreasing trend: the recurring structures have generally diminishing magnitudes. This feature can be easily understood by taking into account the diminishing activity of the comet. The simple Ansatz presented in Eq. (1) cannot capture this feature, and thus the model in its simplest form strongly overestimates the last two structures (19-23 Sep) of the density curve. We overcome this problem by making the $k$ parameter dependent on the distance from the Sun, since it is primarily the Sun-comet distance $R$ that determines the cometary activity. According to Hansen et al. (2016), the production rate suffers approximately a tenfold decrease for every 0.58-0.6 AU travelled away from the Sun. In the last month the comet moved from 3.68 to 3.84 AU, which suggests that the activity was almost halved during this period. We can take this factor into account by defining $k(R)$ as

$$k(R) = k_0 \cdot 10^{-(R - R_0)/D}, \qquad (2)$$

where $R_0 = 3.68$ AU, $D = 0.6$ AU and $k_0 = k(R_0)$ is the new constant parameter to fit. The quality of the fit depends only slightly on the value of $D$; similar results can be achieved if we choose the parameter anywhere in the range 0.5 AU $< D <$ 1.2 AU. We fitted the density measurements by inserting the time variation of the $(r, \theta, \phi)$ coordinates of the spacecraft into the simple function presented in Eq. (1), and used Eq. (2) to take into account the influence of the changing Sun-comet distance $R$ on the value of $k = k(R)$. Fig. 4 shows the very good agreement between the model (red curve) and the MIP cometary plasma density in situ measurements (black). After combining Eqs. (1) and (2), the final form of the model can be written as

$$n(r, \theta, \phi, R) = \frac{k_0 \cdot 10^{-(R - R_0)/D}}{r}\,\bigl(1 + a\cos(\theta - \theta_0)\bigr)\,\bigl(1 + b\cos(\phi - \phi_0)\bigr), \qquad (3)$$

with $R_0 = 3.68$ AU and $D = 0.6$ AU as above. We do not expect such a simple model to account for all the short scale features observed in the measurements, which can be associated with local plasma dynamics and/or variations in solar wind forcing. However, the model reflects the large-scale behaviour very well, in particular the main periodicity, the abrupt drops after the main density peaks, and also the presence of secondary peaks next to the main peaks. Moreover, it fits well both the peak widths and amplitudes. The amplitudes and sometimes the positions of the third and fourth peaks show significant deviations, which are probably due to a more complex source structure than the simple first order angle dependence we used. The amplitudes of the main peaks are usually somewhat underestimated by this first-order model. This means that the angular distribution features a sharper (higher-order) peak over the highest activity source region.
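To make the fitting procedure concrete, the sketch below evaluates the model of Eq. (3) and fits its five free parameters by least squares. This is an illustration under stated assumptions rather than the authors' code: the variable names, the units ($r$ in km, so that $k_0$ is the density one kilometre above the centre; angles in radians) and the starting values are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

R0, D = 3.68, 0.6  # AU, as in Eq. (2)

def n_model(coords, k0, a, theta0, b, phi0):
    """Eq. (3): r^-1 radial decay, first-order cosine modulation in
    comet-fixed latitude theta and longitude phi, and a tenfold drop
    in activity per D AU of heliocentric distance R."""
    r, theta, phi, R = coords                      # km, rad, rad, AU
    k = k0 * 10.0 ** (-(R - R0) / D)
    return (k / r) * (1.0 + a * np.cos(theta - theta0)) \
                   * (1.0 + b * np.cos(phi - phi0))

# r, lat, lon, R_sun are the spacecraft coordinates at the measurement
# times and n_e the MIP electron densities (hypothetical array names):
# popt, pcov = curve_fit(n_model, (r, lat, lon, R_sun), n_e,
#                        p0=[1e4, 0.5, -np.pi / 2, 0.1, 0.0])
```

With the best-fit values quoted below ($a = 0.76$, $b = 0.13$), the pole-to-pole and longitude modulation ratios follow directly as $(1-a)/(1+a) \approx 0.14$ and $(1-b)/(1+b) \approx 0.77$.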
We assume a single smoothly varying source region in our model, from which the majority of the ionized particles originate. The fact that this simple assumption describes the density distribution so well probably means that most of the small scale density variations are smoothed out before the gas and the plasma reach the sampled altitudes. This does not require a collisional process, since the measured density is the sum of the contributions of all the individual sources. If the measurement is performed far enough from the sources (the distance from the surface is much larger than the source separation), then all the sources are summed up with similar geometric attenuation factors, and the result is a smooth function reflecting the average source strength. (In contrast, close to the surface, the material sources closest to the spacecraft would dominate the measurements, but the 4 km minimum altitude of our orbits ensures an already significant averaging. As the main peaks in the data occur close to the surface, some traces of the low altitude inhomogeneities show up as deviations from the model near the main peaks.) Since our model captures the main features of the plasma density structure, the observed deviations can be used to investigate the fine structure caused by transient or local effects. Such effects can be, for example, local spatial variations due to the fine structure of the source, temporal variation of the ionization rates, or even solar wind transients. The event on 17 September is the most significant example of such deviations; we are currently investigating its possible cause. The only peculiarity of the 17 September event revealed so far is an excess of suprathermal electrons. All the other plasma density peaks are accompanied by a depletion and cooling of the suprathermal electrons, which can be expected as these events take place in the densest region of the neutral atmosphere and the electrons are cooled by neutral collisions. The excess on 17 September is indicative of a singular process replenishing the suprathermal electron component. The anomalous increase in plasma density observed on 17 September is therefore likely caused by an increase of electron impact ionization associated with this excess of suprathermal electrons, as reported for different periods in previous studies (Heritier et al., 2018). In agreement with previous results based on measurements from earlier phases of the comet's lifetime (Edberg et al., 2015; Vigren et al., 2016; Nemeth, 2020), the electron density falls off approximately as $r^{-1}$ in the fading coma of comet 67P. This $r^{-1}$ dependence of the electron density is a remarkably persistent feature of the cometary environment everywhere where the energy density of the cometary plasma dominates over that of the solar wind. Further away from the nucleus, where the effects of the solar wind dominate, this rule is not expected to hold. Behar et al. (2018) created a semi-analytical model of this region, in which the transition from newborn ions into pick-up ions is treated as a loss term for the newborn ion population. Nilsson et al. (2018) interpreted the energy spectra of the pick-up ions in terms of their source region ion density, which appeared to fall off as $r^{-2}$, in accordance with the expected production rate.
It is important to note that the use of separate radial and angular variables in our analytical model is equivalent to the angular structure being independent of the radial distance. This means that the plasma motion is essentially radial at the distances considered here, irrespective of the location in latitude and longitude.
The electron density features a maximum in the southern hemisphere; the best fit to the measured MIP data is achieved when we set the location of the density maximum around the south pole. This result agrees well with the findings of investigations of the neutral density after perihelion (Kramer et al., 2017; Combi et al., 2020) that found an active southern hemisphere and showed the separation of the sub-solar point and the highest density areas above the comet. According to the findings of Combi et al. (2020), during the last months of the Rosetta mission the dominant CO2 surface activity distribution showed a strong latitudinal dependence with a maximum at latitudes around 90° in the southern hemisphere and a longitudinal dependence with a faint maximum at longitudes around 0°. Kramer et al. (2017) reported that in May 2016 the neutral densities had a maximum above longitudes around −10°. In agreement with this, we have found an electron density maximum at $\phi_0 = -15°$ in our model; values between −30° and 0° give similar results.
This study shows that latitude plays a very important role in the density distribution: the high latitudinal modulation amplitude $a = 0.76$ means that the density over the north pole is only 14% of the density over the south pole, since the ratio of the two values is $(1 - 0.76)/(1 + 0.76) \approx 0.14$. In contrast, the longitudinal position influences the density only slightly, with a modulation amplitude of $b = 0.13$; thus the minimum over longitude is 77% of the maximum, since $(1 - 0.13)/(1 + 0.13) \approx 0.77$.
A map of the model density distribution in radial distance and latitude is shown in Fig. 5, to be compared with Fig. 3. The model explains the cometary plasma densities measured along the Rosetta orbiter trajectories very well. Fig. 6 is a longitude-latitude map of the electron density 4 km above the centre of the nucleus (top panel). This 4 km altitude is the minimum altitude sampled in this time period, but according to Heritier et al. (2017), it also coincides with the height of the peak ionospheric density. The bottom panel projects the density contours onto a map (El-Maarry et al., 2016) showing surface features and regions of 67P. The highest densities were measured over the Bes region, while the lowest activity corresponds to Seth.
These maps show the plasma distribution in comet-fixed coordinates. Since at this time of the mission both the neutral flow and the plasma were tenuous, the bulk motion of plasma particles points radially outwards from the cometary nucleus in the inertial frame. This means that in comet-fixed coordinates they move along slightly bent trajectories. Since close to the nucleus the radial flow speed (~500-1000 m/s; Hansen et al., 2016) is much larger than the apparent tangential speed in the comet-fixed frame (~2 m/s at 15 km from the comet), this effect does not change the picture described above: close to the nucleus the plasma motion can be assumed to be approximately radial in the comet-fixed frame as well. In the 4-15 km radial range of our study we thus see a plasma cloud radially expanding with respect to the comet and preserving the original latitude-longitude distribution of the source surface.
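The quoted apparent tangential speed follows from rigid corotation at the 12.4 h rotation period; a quick check of the arithmetic (ours, not part of the original analysis):

```python
import math

r = 15e3                          # distance from the nucleus centre [m]
period = 12.4 * 3600              # nucleus rotation period [s]
v_tan = 2 * math.pi * r / period  # corotation speed at r
print(f"{v_tan:.1f} m/s")         # ~2.1 m/s, matching the ~2 m/s above
```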
Conclusions
Near the end of the Rosetta orbiter operations, although comet 67P was more than 3.7 AU from the Sun, in situ measurements still show clear signs of a weak but well defined cometary plasma environment. During the last month of the Rosetta operations, in September 2016, the spacecraft moved along a periodic, recurrent orbit that made it possible to study the 3D spatial distribution of the plasma density near the nucleus. In this paper, we derived a simple and useful model to explain the plasma density distribution in the coma of comet 67P in September 2016.
Based on the in situ MIP electron density measurements, we defined a simple distance, latitude and longitude dependent first-order cosine function to model the 3D spatial distribution of the cometary plasma. The model features an $r^{-1}$ dependence on the distance from the centre of the nucleus. It also depends slightly on the Sun-comet distance, because the cometary activity diminishes as the comet moves away from the Sun. A remarkable advantage of this model is that the four variables of interest are separated, thus showing the role of each independent variable in the 3D mapping of the cometary ionosphere.
This 3D cometary plasma density distribution model reproduced the Rosetta MIP observations remarkably well. The model reflects the observed structures in the plasma density distribution, in particular the main periodicity, the abrupt drops after the main peaks, and even the presence of secondary peaks next to the main peaks; it fits the peak widths as well as the amplitudes. We trust that this first-order 3D model of the cometary ionosphere of 67P will also make it possible to better understand the local plasma dynamics identified as local discrepancies between the Rosetta plasma observations and the model described in this work.
The plasma density distribution shows a strong latitudinal dependence: the plasma density is highest above the southern hemisphere. This is consistent with the neutral density observations after the comet's perihelion passage (Kramer et al., 2017; Combi et al., 2020). Indeed, the southern, nightside hemisphere produces more plasma than the sunlit northern hemisphere, mostly due to the higher neutral outgassing rates. Our model shows that the plasma density can be described well by assuming only a single plasma source at longitudes around −15°. This also correlates with the findings of Kramer et al. (2017), who found that in May 2016 the neutral densities had a maximum above longitudes around −10°.
"Physics",
"Environmental Science"
] |
Marginal integration for nonparametric causal inference
We consider the problem of inferring the total causal effect of a single variable intervention on a (response) variable of interest. We propose a certain marginal integration regression technique for a very general class of potentially nonlinear structural equation models (SEMs) with known structure, or at least a known superset of adjustment variables; we call the procedure S-mint regression. We derive that it achieves the same convergence rate as one-dimensional nonparametric regression: for example, single variable intervention effects can be estimated with convergence rate $n^{-2/5}$ assuming smoothness with twice differentiable functions. Our result can also be seen as a major robustness property with respect to model misspecification which goes much beyond the notion of double robustness. Furthermore, when the structure of the SEM is not known, we can estimate (the equivalence class of) the directed acyclic graph corresponding to the SEM, and then proceed by using S-mint based on these estimates. We empirically compare the S-mint regression method with more classical approaches and argue that the former is indeed more robust, more reliable and substantially simpler.
Introduction
Understanding cause-effect relationships between variables is of great interest in many fields of science. An ambitious but highly desirable goal is to infer causal effects from observational data obtained by observing a system of interest without subjecting it to interventions. This would allow us to circumvent potentially severe experimental constraints or to substantially lower experimental costs. The words "causal inference" (usually) refer to the problem of inferring effects which are due to (or caused by) interventions: if we make an outside intervention at a variable X, say, what is its effect on another response variable of interest Y? We describe some examples in Section 1.3. It is well known that "association is not equal to causation". Hence, the tools for inferring causal effects are different from regression methods; but, as we will argue, regression methods, when properly applied, remain a useful tool for causal inference. Various fields and concepts have contributed to the understanding and quantification of causal inference: the framework of potential outcomes and counterfactuals (cf. Rubin, 2005), see also Dawid (2000); structural equation modeling (cf. Bollen, 1998); and graphical modeling (cf. Lauritzen and Spiegelhalter, 1988; Greenland et al., 1999); the book by Pearl (2000) provides a nice overview.
We consider aspects of the problem indicated above, namely inferring intervention or causal effects from observational data without external interventions. Thus, we deal (in part) with the question of how to infer causal effects without relying on randomized experiments or randomized studies. Besides fundamental conceptual aspects, as treated for example in the books by Pearl (2000), Spirtes et al. (2000) and Koller and Friedman (2009), important issues include statistical tasks such as estimation accuracy and robustness with respect to model misspecification. This paper focuses on the two latter topics, covering also high-dimensional sparse settings with many variables (parameters) but relatively few observational data points. We make use of a marginal integration regression method which has been proposed for additive regression modeling (Linton and Nielsen, 1995). Its use in causal inference is novel, and our main result (Theorem 1) establishes optimal convergence properties and justifies the method as a fully robust procedure against model misspecification, as explained further in Section 1.2.
Basic concepts and definitions for causal inference
We very briefly introduce some of the basic concepts for causal inference (the reader who is familiar with them can skip this subsection). We consider p random variables X_1, ..., X_p, where one of them is a response variable Y of interest and one of them an intervention variable X, that is, the variable where we make an external intervention by setting X to a certain value x. Such an intervention is denoted by Pearl's do-operator do(X = x) (cf. Pearl, 2000). We denote the indices corresponding to Y and X by j_Y and j_X, respectively: thus, Y = X_{j_Y} and X = X_{j_X}. We assume a setting where all relevant variables are observed, i.e., there are no relevant hidden variables. The system of variables is assumed to be generated from a structural equation model (SEM):

$$X_j \leftarrow f_j(X_{\mathrm{pa}(j)}, \varepsilon_j), \quad j = 1, \ldots, p. \qquad (1)$$
Thereby, ε_1, ..., ε_p are independent noise (or innovation) variables, and there is an underlying structure in terms of a directed acyclic graph (DAG) D, where each node j corresponds to the random variable X_j. We denote by pa(j) = pa_D(j) the set of parents of node j in the underlying DAG D, and the f_j(·) are assumed to be real-valued (measurable) functions. For any index set U ⊆ {1, ..., p} we write X_U := (X_v)_{v∈U}, for example X_{pa(j)} = (X_v)_{v∈pa(j)}. The causal mechanism we are interested in is the total effect of an intervention at a single variable X on a response variable Y of interest. The distribution of Y when doing an external intervention do(X = x) by setting variable X to x is denoted by its density (assumed to exist) or discrete probability function p(y|do(X = x)). The mathematical definition of p(y|do(X = x)) can be given in terms of a so-called truncated Markov factorization or, maybe more intuitively, by direct plug-in of the intervention value x for variable X and propagating this intervention value x to all other random variables, including Y, in the structural equation model (1); precise definitions are given in e.g. Pearl (2000) or Spirtes et al. (2000). The important underlying assumption in the definition of p(y|do(X = x)) is that the functional forms and error distributions of the structural equations for all the variables X_j different from X do not change when making an intervention at X.
A very powerful representation of the intervention distribution is given by the well-known backdoor adjustment formula. We say that a path in a DAG D is blocked by a set of nodes S if and only if it contains a chain .. → m → .. or a fork .. ← m → .. with m ∈ S, or a collider .. → m ← .. such that m ∉ S and no descendant of m is in S. Furthermore, a set of variables S is said to satisfy the backdoor criterion relative to (X, Y) if no node in S is a descendant of X and if S blocks every path between X and Y with an arrow pointing into X. For a set S that satisfies the backdoor criterion relative to (X, Y), the backdoor adjustment formula now reads:

$$p(y|do(X = x)) = \int p(y|X = x, X_S = x_S)\, dP(x_S), \qquad (2)$$

where p(·) and P(·) are generic notations for the density or distribution (Pearl, 2000, Theorem 3.3.2). An important special case of the backdoor adjustment formula is obtained when considering the adjustment set S = pa(j_X): if j_Y ∉ pa(j_X), that is, Y is not in the parental set of the variable X, then

$$p(y|do(X = x)) = \int p(y|X = x, X_{\mathrm{pa}(j_X)})\, dP(X_{\mathrm{pa}(j_X)}). \qquad (3)$$
Thus, if the parental set pa(j_X) is known, the intervention distribution can be calculated from the standard observational conditional and marginal distributions. Our main focus is the expectation of Y when doing the intervention do(X = x), the so-called total effect:

$$E[Y|do(X = x)] = \int y\, p(y|do(X = x))\, dy. \qquad (4)$$
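The content of Eqs. (2)-(4) is easy to illustrate numerically. The following sketch uses a toy SEM of our own construction (the functional forms are hypothetical and chosen purely for illustration): a confounder Z is a parent of both X and Y, so the naive conditional mean E[Y|X = x] differs from the total effect E[Y|do(X = x)], while adjusting for S = pa(j_X) = {Z} as in Eq. (3) recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy SEM (hypothetical): Z <- eps_Z, X <- Z + eps_X, Y <- sin(X) + Z^2 + eps_Y.
Z = rng.normal(size=n)
X = Z + rng.normal(size=n)
Y = np.sin(X) + Z**2 + rng.normal(size=n)

x = 1.0
# Backdoor adjustment with S = pa(j_X) = {Z}: average E[Y | X = x, Z] over P(Z).
# The structural equation for Y is known here, so E[Y | X = x, Z] = sin(x) + Z^2,
# and the empirical mean below is the plug-in version of Eq. (3).
total_effect = np.mean(np.sin(x) + Z**2)        # -> sin(1) + 1, about 1.84

# Naive conditioning on X near x is biased, because X and Z are dependent:
# E[Z^2 | X = 1] = 0.75, so E[Y | X = 1] is about sin(1) + 0.75, about 1.59.
naive = Y[np.abs(X - x) < 0.05].mean()
print(total_effect, naive)
```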
A general and often used route for inferring E[Y|do(X = x)] is as follows: the directed acyclic graph (DAG) corresponding to the structural equation model (SEM) is either known or (its Markov equivalence class is) estimated from data; building on this, one can estimate the functions in the SEM (edge functions in the DAG) and the error distributions in the SEM, and finally extract an estimate of E[Y|do(X = x)], or bounds on this quantity if the DAG is not identifiable from the observational distribution. See for example Spirtes et al. (2000); Pearl (2000); Maathuis et al. (2009); Spirtes (2010).
Our contribution
The new results of this paper should be explained for two different scenarios and application areas: one where the structure of the DAG D in the SEM is known, and the other where the structure and the DAG D are unknown and estimated from data. Of course, the second setting is linked to the first by treating the estimated structure as the true known structure. However, due to estimation errors, a separate discussion is in place.
Structural equation models with known structure
We consider a general SEM as in (1) with known structure in the form of a DAG D but unknown functions f_j and unknown error distributions for ε_j. As already mentioned, our focus is on inferring the total effect

$$E[Y|do(X = x)] = \int y\, p(y|do(X = x))\, dy, \qquad (4)$$

where p(y|do(X = x)) is the interventional density (or discrete probability function) of Y as loosely described in Section 1.1. The first approach to infer the total effect in (4) is to estimate the functions f_j and the error distributions of the ε_j; it is then possible to calculate E[Y|do(X = x)], typically using some path-based method based on the DAG D (see also Section 3.1). This route is essentially impossible without putting further assumptions on the functional form of the f_j in the SEM (1). For example, one often makes the assumption of additive errors, and if the cardinality of the parental set |pa(j)| is large, additional constraints such as additivity of a nonparametric function are needed to avoid the curse of dimensionality. Thus, to keep the general, possibly non-additive structure of the functions f_j in the SEM, we have to reject this first approach.
The second approach for inferring the total effect in (4) relies on the powerful backdoor adjustment formula in (2). At first sight, the problem seems ill-posed because of the appearance of E[Y |X = x, X S ] for a set S with possibly large cardinality |S|. But since we integrate over the variables X S in (2), we are not entering the curse of dimensionality. This simple observation is a key idea of this paper. We present an estimation technique for E[Y |do(X = x)], or other functionals of p(y|do(X = x)), using marginal integration which has been proposed and analyzed for additive regression modeling (Linton and Nielsen, 1995). The idea of marginal integration is to first estimate a fully nonparametric regression of Y versus X and the variables X S from a valid adjustment set satisfying the backdoor criterion (for example the parents of j X or a superset thereof) and then average the obtained estimate over the variables X S . We call the procedure "S-mint" standing for marginal integration with adjustment set S. Our main result in Theorem 1 establishes that E[Y |do(X = x)] can be inferred via marginal integration with the same rate of convergence as for one-dimensional nonparametric function estimation for a very large class of structural equation models with potentially non-additive functional forms in the equations. Therefore, we achieve a major robustness result against model misspecification, as we assume only some standard smoothness assumptions but no further conditions on the functional form or nonlinearity of the functions f j in the SEM, not even requiring additive errors. Our main result (Theorem 1) also applies using a superset of the true underlying DAG D (i.e. there might be additional directed edges in the superset), see Section 2.3. For example, such a superset could arise from knowing the order of the variables (e.g. in a time series context), or an approximate superset might be available from estimation of the DAG where one wouldn't care too much about slight or moderate overfitting.
Inferring E[Y|do(X = x)] under model misspecification is the theme of double robustness in causal inference (cf. van der Laan and Robins, 2003). There, misspecification of either the regression or the propensity score model is allowed, but at least one of them has to be correct: the terminology "double robustness" is intended to reflect this kind of robustness. In contrast to double robustness, we achieve here "full robustness", where essentially any form of "misspecification" is allowed, in the sense that S-mint does not require any specification of the functional form of the structural equations in the SEM. More details are given in Section 2.4.
The local nature of parental sets. Our S-mint procedure requires the specification of a valid adjustment set S: as described in (3), we can always use the parental set pa(j_X) if j_Y ∉ pa(j_X). The parental variables are often an interesting choice for an adjustment set, corresponding to a local operation. From a computational viewpoint, determining the parental set is very efficient (see Section 4). Furthermore, as discussed below, the local nature of the parental sets can be very beneficial in the presence of only approximate knowledge of the true underlying DAG D.
Structural equation models with unknown structure
Consider the SEM (1), but assume now that the DAG D is unknown. For this setting, we propose a two-stage scheme ("est S-mint", Section 3.5). First, we estimate the structure of the DAG (or the Markov equivalence class of DAGs) or the order of the variables from observational data. To do this, all current approaches make further assumptions on the SEM in (1). See for example Chickering (2002); Teyssier and Koller (2005); Shimizu et al. (2006); Kalisch and Bühlmann (2007); Schmidt et al. (2007); Hoyer et al. (2009); Shojaie and Michailidis (2010); Bühlmann et al. (2014).
We can then infer E[Y|do(X = x)] as before with S-mint model fitting, but based on an estimated (instead of the true) adjustment set S. This often seems more advisable than using the estimated functions in the SEM, which are readily available from structure estimation, and pursuing a path-based method with the estimated DAG. Since estimation of (the Markov equivalence class of) the DAG or of the order of the variables is often very difficult and of limited accuracy for finite sample size, the second stage with S-mint model fitting seems fairly robust with respect to errors in order- or structure-estimation and model misspecification, as suggested by our empirical results in Section 5.3. Therefore, such a two-stage procedure with structure- or order-search and subsequent marginal integration leads to reasonably accurate results. We only have empirical results to support this accuracy statement.
As mentioned above, due to their local nature, the parental sets (or subsets thereof) are often a very good choice in the presence of estimation errors when inferring the true DAG (or an equivalence class thereof): instead of assuming high accuracy for recovering the entire (equivalence class of the) DAG, we only need a reasonably accurate estimate of the much smaller and local parental set. Section 5 reports that such a two-stage approach, with S-mint modeling in the second stage, can outperform the direct CAM method (which is based on the assumption of an additive SEM), at least if the sample size is only small or moderate.
The scope of possible applications
Genetic network inference is a prominent example where causal inference methods are used, mainly for estimating an underlying network in terms of a directed graph (cf. Smith et al., 2002; Husmeier, 2003; Friedman, 2004; Yu et al., 2004). The goal is very ambitious, namely to recover the relevant edges of a complex network from observational or few interventional data. This paper does not address this issue: instead of recovering a network (structure), inferring total causal or intervention effects from observational data is a different, maybe more realistic, but still very challenging goal in its full generality. Yet making some progress can be very useful in many areas of application, notably for prioritizing and designing future randomized experiments which have a large total effect on a response variable of interest, ranging from molecular biology and bioinformatics (Editorial Nat. Methods, 2010) to many other fields including economics, medicine and the social sciences. Such model-driven prioritization of gene intervention experiments in molecular biology has been experimentally validated with some success (Maathuis et al., 2010; Stekhoven et al., 2012).
We will discuss an application from molecular biology on a rather "toy-like" level in Section 6. Despite all simplifying considerations, however, we believe that it indicates a broader scope of possible applications. When having approximate knowledge of the parental sets of the variables in a potentially large-scale system, one wouldn't need to worry much about the underlying form of the dependences among (or structural equations linking) the variables: for quantifying the effect of single variable interventions, a specific marginal integration estimator converges with the univariate rate, as stated in our main result, Theorem 1.
Quantifying single variable interventions from observational data is indeed a useful first step. Further work is needed to address the following issues: (i) inference in settings with additional hidden, unobserved variables (cf. Spirtes et al., 2000; Zhang, 2008; Shpitser et al., 2011; Colombo et al., 2012); (ii) inference based on both observational and interventional data (He and Geng, 2008; Hauser and Bühlmann, 2012, 2015); and finally (iii) developing sound tools and methods towards more confirmatory conclusions. The appropriate modifications and further developments of our new results (mainly Theorem 1) towards these points (i)-(iii) are not straightforward. In view of this, and due to the ambitious goal of drawing causal conclusions, all results obtained in applications should be interpreted with care. But we believe that even limited progress within a proper framework of causal inference often leads to better results than sticking to the conceptually wrong framework of marginal correlations or associations from regression.
Causal effects for general nonlinear systems via backdoor adjustment: marginal integration suffices
We present here the, maybe surprising, result that marginal integration allows us to infer the causal effect of a single variable intervention with a convergence rate as for one-dimensional nonparametric function estimation in essentially any nonlinear structural equation model. We assume a structural equation model (as already introduced in Section 1.1)

$$X_j \leftarrow f^0_j(X_{\mathrm{pa}(j)}, \varepsilon_j), \quad j = 1, \ldots, p, \qquad (5)$$

where ε_1, ..., ε_p are independent noise (or innovation) variables, pa(j) denotes the set of parents of node j in the underlying DAG D^0, and the f^0_j(·) are real-valued (measurable) functions. We emphasize the true underlying quantities with a superscript "0". We assume in this section that the DAG D^0, or at least a (super-) DAG D^0_super which contains D^0 (see Section 2.3), is known. As mentioned earlier, our goal is to give a representation of the expected value of the intervention distribution E[Y|do(X = x)] for some variables Y, X ∈ {X_1, ..., X_p}. That is, Y and X are two variables of interest, where we do an intervention at X and want to study its effect on Y. Let S be a set of variables satisfying the backdoor criterion relative to (X, Y), see Section 1.1. We restate the backdoor adjustment formula (2),

$$p(y|do(X = x)) = \int p(y|X = x, X_S = x_S)\, dP(x_S),$$

where p(·) and P(·) are generic notations for the density or distribution. Assuming that we can interchange the order of integration (cf. part 6. of Assumption 1) we obtain

$$E[Y|do(X = x)] = \int E[Y|X = x, X_S = x_S]\, dP(x_S). \qquad (6)$$

This is a function depending on the one-dimensional variable x only and therefore, intuitively, its estimation shouldn't be much exposed to the curse of dimensionality. We will argue below that this is indeed the case.
Marginal integration
Marginal integration is an estimation method which has been primarily designed for additive and structured regression fitting (Linton and Nielsen, 1995). Without any modifications though, it is also suitable for estimation of E[Y | do(X = x)] in (6). Let S be a set of variables satisfying the backdoor criterion relative to (X, Y) (cf. Section 1.1) and denote by s the cardinality of S. We use a nonparametric kernel estimator of the multivariate regression function m(x, x_S) = E[Y | X = x, X_S = x_S], where K and L are two kernel functions and h_1, h_2 the respective bandwidths: the partial local linear estimator at (x, x_S) is m̂(x, x_S) = α̂, where

(α̂, β̂) = argmin_{α,β} Σ_{i=1}^n (Y_i − α − β(X_i − x))² K_{h_1}(X_i − x) L_{h_2}(X_S^(i) − x_S).    (7)

We then integrate over the variables x_S with the empirical mean and obtain

Ê[Y | do(X = x)] = n^{−1} Σ_{i=1}^n m̂(x, X_S^(i)).    (8)

This is a locally weighted average, with localization through the one-dimensional variable x. For our main theoretical result to hold, we assume the following (Assumption 1).
1. The variables X_S have a bounded support supp(X_S).
2. The regression function m(u, u_S) = E[Y | X = u, X_S = u_S] exists and has bounded partial derivatives up to order 2 with respect to u and up to order d with respect to u_S, for u in a neighborhood of x and u_S ∈ supp(X_S).
3. The variables X, X_S have a density p(·, ·) with respect to Lebesgue measure, and p(u, u_S) has bounded partial derivatives up to order 2 with respect to u and up to order d with respect to u_S. In addition, p(x, u_S) is bounded away from zero for u_S ∈ supp(X_S).
4. The kernel functions K and L are symmetric with bounded supports and L is an order-d kernel.
Note that part 6 of Assumption 1 is only needed for interchanging the order of integration in (6). Due to the bounded support of the variables X_S it is not overly restrictive.
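To make the estimator (7)-(8) concrete, the following is a minimal R sketch of the marginal integration step, assuming Gaussian kernels for both K and L (matching the choice described in Section 2.2); the function name smint_estimate and its interface are illustrative and not the authors' implementation.

```r
# Minimal sketch of the marginal integration estimator (8), assuming
# Gaussian kernels and a local linear fit in x only; illustrative names.
smint_estimate <- function(x0, X, XS, Y, h1, h2) {
  XS <- as.matrix(XS)
  # partial local linear estimate m_hat(x0, xS): weighted regression of Y
  # on (X - x0), localized at x0 in X and at xS in the adjustment variables
  m_hat <- function(xS) {
    w <- dnorm((X - x0) / h1) *
      apply(XS, 1, function(z) prod(dnorm((z - xS) / h2)))
    fit <- lm(Y ~ I(X - x0), weights = w)
    unname(coef(fit)[1])  # intercept = alpha-hat = m_hat(x0, xS)
  }
  # marginal integration: average m_hat(x0, .) over the empirical
  # distribution of X_S, as in (8)
  mean(apply(XS, 1, m_hat))
}
```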
As a consequence, the following result from Fan et al. (1998) establishes a convergence rate for the estimator as for one-dimensional nonparametric function estimation.
Theorem 1. Suppose that Assumption 1 holds for a set S satisfying the backdoor criterion relative to (X, Y) in the DAG D^0 from model (5), and suppose that s < d. Then, for suitably chosen bandwidths with h_1 ≍ n^{−1/5} and h_2 tending to zero sufficiently fast,

Ê[Y | do(X = x)] − E[Y | do(X = x)] = O_P(n^{−2/5}).

Proof. The statement follows from Fan et al. (1998, Theorem 1 and Remark 3).
Theorem 1 establishes the convergence rate O_P(n^{−2/5}), choosing h_1 ≍ n^{−1/5}: this matches the optimal rate for estimation of one-dimensional smooth functions having second derivatives, and such a smoothness with respect to the variable x is assumed in part 2 of Assumption 1. Thus, the implication is the important robustness fact that for any potentially nonlinear structural equation model satisfying the regularity conditions in Theorem 1, we can estimate the expected value of the intervention distribution as well as in nonparametric estimation of a smooth function with one-dimensional argument. We note, as mentioned already in Section 1.2.1, that it would be essentially impossible to estimate the functions f_j in (1) in full generality: interestingly, when focusing on inferring the total effect E[Y | do(X = x)], the problem is much better posed, as demonstrated with our concrete S-mint procedure. Furthermore, with the (valid) choice S = pa(j_X) or an (estimated) superset thereof, one obtains a procedure that is only based on local information in the graph: this turns out to be advantageous, see also Section 1.2.1, particularly when the underlying DAG structure is not correctly specified (cf. Section 5.3). We will report on the performance of such an S-mint estimation method in Sections 4 and 5. Note that the rate of Theorem 1 remains valid (for a slightly modified estimator) if we allow for discrete variables in the parental set of X (cf. Fan et al. (1998)).
It is worthwhile to point out that estimation becomes more challenging for S-mint when inferring multiple variable interventions such as E[Y | do(X_1 = x_1, X_2 = x_2)]: the convergence rate is then of the order n^{−1/3} for a twice differentiable regression function.
Remark 1. Theorem 1 generalizes to real-valued transformations t(·) of Y. By using the argument as in (6) and replacing part 6 of Assumption 1 by the corresponding statement for t(Y), we obtain

E[t(Y) | do(X = x)] = ∫ E[t(Y) | X = x, X_S = x_S] dP(x_S).

For example, for t(y) = y² we obtain second moments, and we can then estimate Var(Y | do(X = x)) = E[Y² | do(X = x)] − (E[Y | do(X = x)])² with the same convergence rate as for one-dimensional nonparametric function estimation, using marginal integration of t(Y) versus X, X_S.
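As a possible use of Remark 1, the interventional variance can be estimated from the first two interventional moments; the sketch below reuses the hypothetical smint_estimate() from above.

```r
# Remark 1 with t(y) = y^2: interventional variance from two moments
m1 <- smint_estimate(x0, X, XS, Y,   h1, h2)  # E[Y   | do(X = x0)]
m2 <- smint_estimate(x0, X, XS, Y^2, h1, h2)  # E[Y^2 | do(X = x0)]
v  <- m2 - m1^2                               # Var(Y | do(X = x0))
```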
Implementation of marginal integration
Theorem 1 justifies marginal integration as in (8) asymptotically. One issue is the choice of the two bandwidths h_1 and h_2: we cannot rely on cross-validation because E[Y | do(X = x)] is not a regression function and is not linked to prediction of a new observation Y_new, nor can we use penalized likelihood techniques with e.g. BIC, since E[Y | do(X = x)] does not appear in the likelihood. Besides the difficulty of choosing the smoothing parameters, we think that addressing such a smoothing problem becomes easier, at least in practice, when using an iterative boosting approach (cf. Friedman, 2001; Bühlmann and Yu, 2003).
We propose here a scheme which we found to be most stable in extensive simulations. The idea is to elaborate on the estimation of the function m(x, x_S) = E[Y | X = x, X_S = x_S], from a simple starting point to more complex estimates, while the integration over the variables X_S is done with the empirical mean as in (8).
We start with the following simple but useful result.
Proposition 1. If pa(j_X) = ∅ or if there are no backdoor paths from j_X to j_Y in the true DAG D^0 from model (5), then

E[Y | do(X = x)] = E[Y | X = x].

Proof. If there are no backdoor paths from j_X to j_Y, the empty set S = ∅ satisfies the backdoor criterion relative to (X, Y). The statement then directly follows from the backdoor adjustment formula (2).
We learn from Proposition 1 that in simple cases, a standard one-dimensional regression estimator for E[Y | X = x] would suffice. On the other hand, we know from the backdoor adjustment formula in (6) that we should adjust with the variables X_S. Therefore, it seems natural to use an additive regression approximation for m(x, x_S) as a simple starting point. If the assumptions of Proposition 1 hold, such an additive model fit yields a consistent estimate for the component of the variable x: in fact, it is asymptotically as efficient as one-dimensional function estimation for E[Y | X = x] (Horowitz et al., 2006). If the assumptions of Proposition 1 do not hold, we can still view an additive model fit m̂_add = μ̂ + m̂_add,j_X(x) + Σ_{j∈S} m̂_add,j(x_j) as one of the simplest starting points to approximate the more complex function m(x, x_S). When integrating out with the empirical mean as in (8), we obtain the estimate Ê_add[Y | do(X = x)] = m̂_add,j_X(x). As motivated above and backed up by simulations, m̂_add,j_X(x) is quite often already a reasonable estimator for E[Y | do(X = x)]. In the presence of strong interactions between the variables, the additive approximation may drastically fail though. Thus, we implement marginal integration as follows: starting from m̂_add, we apply L_2-boosting with the nonparametric kernel estimator as in (7). More precisely, we compute the residuals R_1 = Y − m̂_add; for simplicity, the residuals are fitted with a locally constant marginal integration estimator similar to the one mentioned in Section 2.1, and the resulting fit is denoted by ĝ_{R_1}(x, x_S). We add this new function fit to the previous fit and compute again residuals, and we then iterate the procedure b_stop times. Denoting by m̂, ĝ and Y the n-dimensional vectors evaluated at the samples i = 1, ..., n, we have m̂_{b+1} = m̂_b + ĝ_{R_b} with R_b = Y − m̂_b. The final estimate for the total causal effect is obtained by marginally integrating the final fit over the variables X_S with the empirical mean as in (8). The concrete implementation of the additive model fitting is according to the default from the R-package mgcv, using penalized thin plate splines and choosing the regularization parameter in the penalty by generalized cross-validation, see e.g. Wood (2003, 2006). For the product kernel in (7), we choose K to be a Gaussian kernel and L to be a product of Gaussian kernels. The bandwidths h_1 and h_2 in the kernel estimator should be chosen "large", to yield an estimator with low variance but typically high bias. The iterations then reduce the bias. Once we have fixed h_1 and h_2 (and this choice is not very important as long as the bandwidths are "large"), the only regularization parameter is b_stop. It is chosen by the following considerations: for each iteration we compute the sum of absolute differences to the previous approximation over the set of intervention values I (typically the nine deciles, see Section 5), that is

diff_b = Σ_{x∈I} |Ê_b[Y | do(X = x)] − Ê_{b−1}[Y | do(X = x)]|.    (9)

When this quantity becomes reasonably "small", and this needs to be specified depending on the context, we stop the boosting procedure. Such an iterative boosting scheme has the advantage that it is less sensitive to the choice of b_stop than the original estimator in (8) is to the specification of its tuning parameters; in addition, boosting adapts to some extent to different smoothness in different directions (variables). All these ideas are presented at various places in the boosting literature, particularly in Friedman (2001); Bühlmann and Yu (2003); Bühlmann and Hothorn (2007).
In Section 4.2 we provide an example of a DAG with backdoor paths, where the additive approximation is incorrect and several boosting iterations are needed to account for interaction effects between the variables. In the following, we summarize the implementation of our method in Algorithm 1.
Algorithm 1 S-mint with adjustment set S
1: if S = ∅ then
2: Fit an additive regression of Y versus X to obtain m̂_add
3: return m̂_add
4: else
5: Fit an additive regression of Y versus X and the adjustment set variables X_S to obtain m̂_1 = m̂_add
6: Apply L_2-boosting to capture deviations from an additive regression model: for b = 1, ..., b_stop do
7: (i) Compute the residuals R_b = Y − m̂_b
8: (ii) Fit the residuals with the kernel estimator defined in Section 2.1 to obtain ĝ_{R_b}
9: (iii) Update m̂_{b+1} = m̂_b + ĝ_{R_b}
10: end for
11: end if
12: return Do marginal integration: output Ê[Y | do(X = x)] as in (8)
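A hedged sketch of the boosting loop in Algorithm 1, for a single adjustment variable: the additive start uses mgcv::gam as described above, while kernel_fit() is a hypothetical stand-in for the locally constant marginal integration smoother applied to the residuals (all names are ours).

```r
library(mgcv)

b_stop <- 10                               # number of boosting iterations (illustrative)
d   <- data.frame(Y = Y, X = X, XS1 = XS[, 1])
fit <- gam(Y ~ s(X) + s(XS1), data = d)    # additive starting point m_add
m   <- as.numeric(predict(fit))            # current fit m_1 evaluated at the data

for (b in 1:b_stop) {
  R <- d$Y - m                             # residuals R_b = Y - m_b
  g <- kernel_fit(R, d$X, d$XS1)           # hypothetical kernel smoother on residuals
  m <- m + g                               # L2-boosting update: m_{b+1} = m_b + g_{R_b}
}
# afterwards, marginally integrate the final fit over X_S as in (8)
```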
Knowledge of a superset of the DAG
It is known that a superset of the parental set pa(j_X) suffices for the backdoor adjustment in (3). To be precise, let

S(j_X) ⊇ pa(j_X) with S(j_X) ∩ de(j_X) = ∅,    (10)

where de(j_X) are the descendants of j_X (in the true DAG D^0). For example, S(j_X) could be the parents of X in a superset of the true underlying DAG (a DAG with additional edges relative to the true DAG). We can then choose the adjustment set S in (8) as S(j_X) and Theorem 1 still holds true, assuming that the cardinality |S(j_X)| ≤ M < ∞ is bounded. Thus, with the choice S = S(j_X), we can use marginal integration by marginalizing over the variables X_{S(j_X)}.
A prime example where we are provided with a superset S(j_X) ⊇ pa(j_X) with S(j_X) ∩ de(j_X) = ∅ is when we know the order of the variables and can deduce an approximate superset of the parents from it. For example, in a time series context, we might assume a Markovian property and estimate an upper bound of the Markov order, say a maximum time lag p_max which we look back for determining the conditional distributions. A simple construction for a valid superset is then (when the variables are ordered with X_j ≺ X_k for j < k):

S(j_X) = {j : j_X − p_max ≤ j ≤ j_X − 1},    (11)

where "≺" denotes the order relation among the variables and p_max is an upper bound of the lags ensuring that S(j_X) ⊇ pa(j_X).
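For illustration, the construction (11) amounts to the following small helper (the name is ours, not the paper's):

```r
# Superset of parents from a known variable order (11), assuming variables
# are indexed in causal order and a lag bound p_max
superset_parents <- function(jX, p_max) {
  if (jX == 1) return(integer(0))      # the first variable has no candidates
  seq(max(1, jX - p_max), jX - 1)      # all indices preceding jX within p_max lags
}
superset_parents(jX = 7, p_max = 3)    # -> 4 5 6
```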
Corollary 1. Assume the conditions of Theorem 1 for the variables Y, X and X_{S(j_X)}, with S(j_X) as in (10) or as in (11) for ordered variables. Then the conclusion of Theorem 1 holds with the adjustment set S = S(j_X). The statement is an immediate consequence of Theorem 1, as S(j_X) in (10) or in (11) satisfies the backdoor criterion relative to (X, Y).
Fully robust S-mint and connection to double robustness
Theorem 1 establishes that S-mint is fully robust against model-misspecification for inferring E[Y |do(X = x)] or related quantities as mentioned in Remark 1. The existing framework of double robustness is related to this issue and we clarify here the connection.
We adopt for this section the more common notation used in the literature on double robustness. The outcome or response variable of interest is denoted as before by Y, the variable Z is the intervention variable, where Z = z was written above as do(X = z), and X denotes the (potential) confounder variables which were formalized before as X_S. One then specifies a regression model for E[Y | Z, X] (appearing above in the backdoor adjustment in (6)) and a propensity score (Rosenbaum and Rubin, 1983) or inverse probability weighting model (IPW; Robins et al. (1994)): for a binary intervention variable where Z encodes "exposure" (Z = 1) and "control" (Z = 0), the latter is a logistic model for P(Z = 1 | X). A double robust (DR) estimator requires that either the regression model or the propensity score model is correctly specified. If both of them are misspecified, the DR estimator is inconsistent. Double robustness of the augmented IPW approach has been proved by Scharfstein et al. (1999) and double robustness in general was further developed by many others, see e.g. Bang and Robins (2005).
Here, we gain full robustness by adopting a nonparametric modeling approach, without using an IPW or propensity score approach. As for DR estimators, we also assume that the potential confounders (in our terminology, the adjustment set X_S for the intervention variable X) are known. (Semi-)parametric efficiency issues do not play a role in our procedure since we use the nonparametric marginal integration approach: under correct specification of the regression or propensity score model, the DR estimators might thus be more precise, while under misspecification of both models, DR procedures are inconsistent, in contrast to S-mint. Thus, S-mint exhibits a much more general and "full" robustness against model misspecification.
Path-based methods
We assume in the following, until Section 3.5, that we know the true DAG and all true functions and error distributions in the general SEM (1). Thus, in contrast to Section 2, we have here also knowledge of the entire structure in the form of the DAG D^0 (and not only a valid adjustment set S as assumed for Theorem 1). This allows us to infer E[Y | do(X = x)] in different ways than with the generic S-mint regression from Section 2. The motivation to look at other methods is driven by potential gains in statistical accuracy when including the additional information of the functional form or of the entire DAG in the structural equation model. We will empirically analyze this issue in Section 5.
Entire path-based method from root nodes
Based on the true DAG, the variables can always be ordered such that every variable is preceded by its parents. Denote by j_X and j_Y the indices of the variables X and Y, respectively.
Step 1 Set X = X_{j_X} = x and simulate the root variables from their (known or estimated) distributions.
Step 2 Based on Step 1, recursively generate the remaining variables along the order of the DAG from their structural equations. Instead of deriving an analytic expression for p(y | do(X = x)) by integrating out over the other variables {X_{j_k}; k ≠ j_X, j_Y}, we rather rely on simulation. We draw Y^(1), ..., Y^(B) by B independent simulations of Steps 1-2 above and we then approximate, for B large,

E[Y | do(X = x)] ≈ B^{−1} Σ_{b=1}^B Y^(b).

Furthermore, the simulation technique allows us to obtain the intervention distribution p(y | do(X = x)) via e.g. density estimation or histogram approximation based on Y^(1), ..., Y^(B).
The method has an implementation in Algorithm 2 which uses propagation of simulated random variables along directed paths in the DAG. The method exploits the entire paths in the DAG from the root nodes to node j Y corresponding to the random variable Y . Figure 1 provides an illustration.
Algorithm 2 Entire path-based algorithm for simulating the intervention distribution
1: If there is no directed path from j_X to j_Y, the interventional and observational quantities coincide: p(y | do(X = x)) = p(y).
2: If there is a directed path from j_X to j_Y, proceed with Steps 3-9.
3: Set X = X_{j_X} = x and delete all in-going arcs into X.
4: Find all directed paths from root nodes (including j_X) to j_Y, and denote them by p_1, ..., p_q.
5: for b = 1, ..., B do
6: for every path, recursively simulate the corresponding random variables according to the order of the variables in the DAG: (i) simulate the random variables of the root nodes of p_1, ..., p_q; (ii) simulate in each path p_1, ..., p_q the random variables following the root nodes, proceeding recursively according to the order of the variables in the DAG; (iii) continue with the recursive simulation of random variables until Y is simulated.
7: Store the simulated variable Y^(b).
8: end for
9: Use the simulated sample Y^(1), ..., Y^(B) to approximate the intervention distribution.

When having estimates of the true DAG, all true functions and error distributions in the additive structural equation model (13), we would use the procedure above based on these estimated quantities; for the error distributions, we either use the estimated variances in Gaussian distributions, or we rely on bootstrapping residuals from the structural equation model (typically with residuals centered around zero).
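The simulation idea behind Algorithm 2 can be sketched as follows for an additive SEM as in (13). The lists f (structural functions of the parents) and rnoise (noise generators), as well as all names, are assumptions for illustration and not the paper's code.

```r
# Simulate the intervention distribution of Y under do(X_{jX} = x) for a
# known SEM: set X_{jX} <- x (cutting its incoming edges) and simulate all
# other variables in causal order from their structural equations.
sim_do <- function(x, jX, jY, causal_order, parents, f, rnoise, B = 1000) {
  replicate(B, {
    v <- numeric(length(causal_order))
    for (j in causal_order) {
      # for root nodes, parents[[j]] is empty and f[[j]] returns 0
      v[j] <- if (j == jX) x
              else f[[j]](v[parents[[j]]]) + rnoise[[j]](1)
    }
    v[jY]
  })
}
# mean(sim_do(x, jX, jY, ...)) then approximates E[Y | do(X = x)]
```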
Partially path-based method with short-cuts
Mainly motivated by computational considerations (see also Section 3.3), a modification of the procedure in Algorithm 2 is valid. Instead of considering all paths from root nodes to j_Y (corresponding to the variable Y), we only consider all paths from j_X (corresponding to the variable X) to j_Y and simulate the random variables on these paths p′_1, ..., p′_m. Obviously, in comparison to Algorithm 2, m ≤ q and every p′_k corresponds to a path p_r for some r ∈ {1, ..., q}.
Every path p′_k is of the form j_X → ... → j_Y, having some length ℓ_k. For recursively simulating the random variables on the paths p′_1, ..., p′_m, we start by setting X = X_{j_X} = x. Then we recursively simulate the random variables corresponding to all the paths p′_1, ..., p′_m according to the order of the variables in the DAG. For each of these random variables X_j with j ∈ {p′_1, ..., p′_m} and j ≠ j_X, we need the corresponding parental variables and error terms in X_j = f_j^0(X_{pa(j)}, ε_j), where for every k ∈ pa(j) we use the already simulated value of X_k if k lies on one of the paths p′_1, ..., p′_m, and a bootstrap value X*_k otherwise, (12) where the bootstrap resampling is with replacement from the entire data. The errors are simulated according to the error distribution. We summarize the procedure in Algorithm 3, and Figure 1 provides an illustration.
Algorithm 3 Partially path-based algorithm for simulating the intervention distribution
1: If there is no directed path from j_X to j_Y, the interventional and observational quantities coincide: p(y | do(X = x)) = p(y).
2: If there is a directed path from j_X to j_Y, proceed with Steps 3-9.
3: Set X = X_{j_X} = x and delete all in-going arcs into X.
4: Find all directed paths from j_X to j_Y, and denote them by p′_1, ..., p′_m.
5: for b = 1, ..., B do
6: for every path, recursively simulate the corresponding random variables according to the order of the variables in the DAG: (i) simulate in each path p′_1, ..., p′_m the random variables following the node j_X, proceeding recursively as described in (12) according to the order of the variables in the DAG; (ii) continue with the recursive simulation of random variables until Y is simulated.
7: Store the simulated variable Y^(b).
8: end for
9: Use the simulated sample Y^(1), ..., Y^(B) to approximate the intervention distribution.
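The bootstrap step (12) is simple to express in code; Xobs denotes the n × p data matrix (an illustrative name):

```r
# A parent k that does not lie on any directed path from j_X to j_Y is
# replaced by a value resampled with replacement from its observed column
resample_offpath_parent <- function(Xobs, k) {
  Xobs[sample(nrow(Xobs), 1, replace = TRUE), k]
}
```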
Proposition 2. Consider the population case where the bootstrap resampling in (12) yields the correct distribution of the random variables X_1, ..., X_p. Then, as B → ∞, the partially path-based Algorithm 3 yields the correct intervention distribution p(y | do(X = x)) and its expected value E[Y | do(X = x)].
Proof. The statement of Proposition 2 directly follows from the definition of the intervention distribution in a structural equation model.
The same comment applies here as in Section 3.1: when having estimates of the quantities in the additive structural equation model (13), we would use Algorithm 3 based on the plugged-in estimates. The computational benefit of using Algorithm 3 instead of Algorithm 2 is illustrated in Figure 7.
Figure 1: (b) Illustration of the entire path-based method (Algorithm 2): X is set to x, the roots R_1, R_2 and all paths from the root nodes to Y are enumerated (here: p_1, p_2, p_3); the intervention distribution at node Y is obtained by propagating samples along the three paths. (c) Illustration of the partially path-based method (Algorithm 3): X is set to x and all directed paths from X to Y are labeled (here: p′_1); to obtain the intervention distribution at node Y, samples are propagated along the path p′_1 and bootstrap resampled X*_k and X*_l are used according to (12). (d) Illustration of the S-mint method with adjustment set S = pa(j_X): it only uses information about Y, X and the parents of X (here: Pa_1, Pa_2).
Degree of localness
We can classify the different methods according to the degree to which the entire DAG or only a small (local) fraction of it is used. Algorithm 2 is a rather global procedure as it uses entire paths from root nodes to j_Y; only when j_Y is close to the relevant root nodes does the method involve a smaller part of the DAG. Algorithm 3 is of semi-local nature as it does not require considering paths going from root nodes to j_Y: it only considers paths from j_X to j_Y and all parental variables along these paths. The S-mint method based on marginal integration, described in Section 2 and Theorem 1, is of very local character as it only requires knowledge of Y, X and the parental set pa(j_X) (or a superset thereof), but no further information about paths from j_X to j_Y.
In the presence of estimation errors, a local method might be more "reliable" as only a smaller fraction of the DAG needs to be approximately correct; global methods, in contrast, require that entire paths in the DAG are approximately correct. The local versus global issue is illustrated qualitatively in Figure 1, and empirical results about statistical accuracy of the various methods are given in Section 5.
Estimation of DAG, edge functions and error distributions
With observational data, in general, it is impossible to infer the true underlying DAG D^0 in the structural equation model (5), or its parental sets, even as the sample size tends to infinity. One can only estimate the Markov equivalence class of the true DAG, assuming faithfulness of the data-generating distribution; see Spirtes et al. (2000); Pearl (2000); Chickering (2002); Kalisch and Bühlmann (2007); van de Geer and Bühlmann (2013); Bühlmann (2013). The latter three references focus on the high-dimensional Gaussian scenario with the number of random variables p ≫ n, but assuming a sparsity condition in terms of the maximal degree of the skeleton of the DAG D^0. The edge functions and error variances can then be estimated for every DAG member in the Markov equivalence class by pursuing regression of a variable versus its parents.
However, there are interesting exceptions regarding identifiability of the DAG from the observational distribution. For nonlinear structural equation models with additive error terms, it is possible to infer the true underlying DAG from infinitely many observational data (Hoyer et al., 2009), and various methods have been proposed to infer the true underlying DAG D^0 and its corresponding functions f_j^0(·) and error distributions of the ε_j's. As an example of a model with identifiable structure (DAG D^0), we can specialize (5) to an additive structural equation model of the form

X_j = Σ_{k∈pa(j)} f^0_{j,k}(X_k) + ε_j,  j = 1, ..., p,    (13)

where ε_1, ..., ε_p are independent with ε_j ∼ N(0, (σ_j^0)²), and the true underlying DAG is denoted by D^0. This model is used for all numerical comparisons of the S-mint procedure and the path-based algorithms in Section 5. Estimation of the unknown quantities D^0, f^0_{j,k} and error variances (σ_j^0)² can be done with the "CAM" method (Bühlmann et al., 2014). It is consistent, even in the high-dimensional scenario with p ≫ n but assuming a sparse underlying true DAG, as shown in the mentioned reference. We will use the "CAM" method for the empirical results in Section 5.4, in connection with the two-stage est S-mint described in the following section.
Two-stage procedure: est S-mint
If the order of the variables or (a superset of) the parental set is unknown, we have to estimate it from observational data; this leads to the following two-stage procedure, described here for the case where the parental set pa(j_X) is identifiable: Stage 1 Estimate S(j_X), described in Section 2.3, a superset of the parental set, from observational data. Stage 2 Based on the estimate Ŝ(j_X), run S-mint regression with S = Ŝ(j_X).
Even if in Stage 1 one would also obtain estimates of functions in some specified SEM besides an estimate of S(j_X), we would not use the estimated functions in Stage 2. We present empirical results for the est S-mint procedure, in connection with the "CAM" method for Stage 1 for estimating a valid adjustment set S(j_X), in Section 5.4.
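In code, the two stages can be sketched as follows, with estimate_dag() a placeholder for a structure learner such as CAM (not a real API call) and smint_estimate() the hypothetical S-mint routine sketched in Section 2.1:

```r
A   <- estimate_dag(Xobs)                 # Stage 1: adjacency matrix, A[k, j] = 1 for k -> j
S   <- which(A[, jX] == 1)                # estimated parental set of X (valid adjustment set)
eff <- smint_estimate(x0, Xobs[, jX],
                      Xobs[, S, drop = FALSE],
                      Y, h1, h2)          # Stage 2: S-mint with S = S-hat(jX)
```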
If the parental set pa(j_X) is not identifiable (see Section 3.4), one could apply Stage 1 to obtain a set {Ŝ(j_X)^(1), ..., Ŝ(j_X)^(c_j)} such that the parental set from each Markov-equivalent DAG is contained in at least one of the Ŝ(j_X)^(k) for some k. Stage 2 would then be performed for all estimates {Ŝ(j_X)^(1), ..., Ŝ(j_X)^(c_j)}, and one could then derive bounds on the quantity E[Y | do(X = x)] in the spirit of the approach from Maathuis et al. (2009).
In Section 5.5 we give some intuition for why the two-stage est S-mint often leads to better and more reliable results than (at least some) other methods which rely on path-based estimation.
Empirical results: non-additive structural equation models
In this section we provide simple proof-of-concept examples for the generality of the proposed S-mint estimation method (Algorithm 1). In particular, the robustness of S-mint is experimentally validated for models where the structural equation model is not additive but given in its general form (5). In Section 4.1 we empirically show that path-based methods based on the wrong additive model assumption in (13) may fail even in the absence of backdoor paths, where the S-mint method boils down to estimation of an additive model. In Section 4.2 we add backdoor paths to the graph and a strong interaction term to the corresponding structural equation model; we then empirically show that S-mint manages to approximate the true causal effect, whereas fitting only an additive regression fails. Section 4.3 contains an example that demonstrates a good performance of S-mint even in the presence of non-additive noise in the structural equation model. Finally, Section 4.4 empirically illustrates issues with a fixed choice of the bandwidths in the product kernel in (7) in some cases.
Causal effects in the absence of backdoor paths
First let us illustrate the sensitivity of the path-based methods with respect to model specification, using a simple example of a 4-node graph with no backdoor paths between X_1 = X and Y (see Figure 2). We consider a corresponding (non-additive) structural equation model (14), where ε_j ∼ N(0, σ_j²) with σ_1 = σ_2 = 0.7 and σ_3 = σ_4 = 0.2, and generate n samples from this model. From Proposition 1 we know that for j ∈ {1, 2, 3}, fitting an additive regression of Y versus X_j and X_{pa(j)} suffices to obtain the causal effect E[Y | do(X_j = x)]; that is, all causal effects can be readily estimated with an additive model. Our goal is to infer E[Y | do(X_1 = x)], based on n = 500 and n = 10 000 samples of the joint distribution of the 4 nodes. The results are displayed in Figure 2.
Figure 2: True and estimated causal effects for model (14), with S = S(j_X = 1) = ∅, based on one representative sample each for sample sizes n = 500 (top) and n = 10 000 (bottom). S-mint regression is consistent while the entire path-based method with a misspecified additive SEM (Algorithm 2) is not. The relative squared errors (over the 51 points x) are 0.013 for S-mint regression and 6.239 for the entire path-based method, both for n = 10 000.
We consider the entire path-based Algorithm 2 (and Algorithm 3 as well, not shown) assuming an additive structural equation model as in (13). We see clearly that this approach is exposed to model misspecification, while S-mint (in this case simply the fitting of an additive model, i.e., b_stop = 1 with the number of additional boosting iterations equaling zero) is not, and leads to reliable and correct results. We included two settings: n = 500 to be consistent with the settings in the numerical study from Section 5, and n = 10 000 to demonstrate that the failure of the path-based methods is not a small-sample issue but an inconsistency phenomenon.
Causal effects in the presence of backdoor paths
We now consider a slight (but crucial) modification of the above DAG that has been proposed by Linbo Wang and Mathias Drton through private communication. We consider the 4-node graph from Section 4.1 with additional edges X_1 → Y and X_2 → Y and a corresponding structural equation model (15), where ε_j ∼ N(0, σ_j²) with σ_1 = σ_2 = 0.7 and σ_3 = σ_4 = 0.2, and where the equation for Y contains the interaction term X_1 · X_2 · X_3. Note that this modification introduces two backdoor paths from X_3 to Y. The goal is to estimate the causal effect E[Y | do(X_3 = x)] using the S-mint estimation procedure introduced in Algorithm 1 with different numbers of boosting iterations. In Figure 3 one clearly sees that the additive approximation (with no additional boosting iterations) fails to approximate the total causal effect: it is not able to capture the full interaction term X_1 · X_2 · X_3. However, adding boosting iterations significantly improves the approximation of the true causal effect, even for the small sample size n = 500.
Causal effects in the presence of non-additive noise
Theorem 1 does not put any explicit restrictions on the noise structure in the structural equation model. In particular, S-mint also works well in the case of non-additive noise. As an example, let us consider the causal graph and SEM from Section 4.2, but with the structural equation corresponding to Y in (15) replaced by a version (16) with non-additive noise. The goal is again to estimate the causal effect E[Y | do(X_3 = x)] based on n = 500 observed samples of the joint distribution. Figure 4 shows that S-mint yields a close approximation to the true causal effect.

Figure 4: Causal effect for model (16), exhibiting non-additive noise in the structural equation model, with S-mint regression for the additive model fit (starting value) and various boosting iterations (left). Absolute differences between consecutive boosting iterations as in (9) (upper right) and integrated squared error for approximating the true effect as a function of boosting iterations (lower right). The adjustment set is chosen as the parental set of X_3, that is S(j_X = 3) = {1, 2}. The results are based on one representative sample of size n = 500.
Choice of the bandwidth
Theorem 1 provides an asymptotic result but does not specify how to choose the bandwidths h_1 and h_2 in the finite sample case. In particular, the same fixed choice of h_2 for all variables in the adjustment set S can be suboptimal in some situations. As an example, let us consider the graph and structural equations from Section 4.2, where we replace one equation in (15) to obtain model (17). The goal is to approximate the causal effect E[Y | do(X_3 = x)] based on n = 500 samples of the joint distribution. Inspecting the scatterplots of Y versus X_1, X_2 and X_3 (see Figure 5) suggests that the bandwidth h_2^(1) corresponding to X_1 should be larger than the bandwidth h_2^(2) corresponding to X_2.

Figure 5: Scatterplots of the data from model (17) of Y versus X_1, X_2 and X_3. They reveal a difference in wiggliness.

Figure 6 depicts the corresponding approximated causal effects using the S-mint method for a fixed bandwidth (h_2^(1), h_2^(2)) = (0.4, 0.4) and for a variable bandwidth (h_2^(1), h_2^(2)) = (0.8, 0.4), respectively. Clearly, the approximation with the variable bandwidth outperforms the approximation with the fixed bandwidth. Adaptive bandwidth choice methods as proposed by Polzehl and Spokoiny (2000) might be suitable, at the price of a more complicated and hence more variable estimation scheme.
Empirical results: additive structural equation models
The goal of the numerical experiments in this section is to quantify the estimation accuracy of the total causal effect E[Y | do(X = x)] for two variables X, Y ∈ {X_1, ..., X_p} such that Y is a descendant of X (if Y is an ancestor of X, the interventional expectation corresponds to the observational expectation E[Y]). We consider in this section only additive structural equation models as in (13). This allows for a comparison of the S-mint method and the path-based methods.
For S-mint regression, we use the implementation described in Section 2.2. The kernel functions K and L in the S-mint procedure are chosen as a Gaussian kernel with bandwidth h_1 and a product of Gaussian kernels with bandwidth h_2, respectively. For simplicity, in the style of Fan et al. (1998), we choose h_1 and h_2 as 0.5 times the empirical standard deviation of the respective covariables in all of our simulations in this section. We use the following two criteria for b_stop, that is, as an automated stopping criterion for the boosting iterations: 1. stop if an iteration changes the approximation by less than 1%, that is, the integrated difference (9) to the previous approximation is less than 0.01; 2. stop if the integrated difference between two consecutive approximations is less than 5% of the initial integrated difference.
When using the path-based methods from Section 3, we estimate the functions f^0_j by additive functions using the R-package mgcv with default values (thus using the knowledge of the form of the nonlinear functions in the SEM).
We test the performance of four different methods: S-mint with parental sets (Algorithm 1) with the stopping of boosting iterations as described above, additive regression with parental sets (the first step of S-mint, without additional boosting iterations), the entire path-based method from root nodes (Algorithm 2) and the partially path-based method with short-cuts (Algorithm 3). The reference effect E[Y | do(X = x)] is computed using Algorithm 2 with known (true) functions f^0_{j,k} and error variances (σ_j^0)², based on 5n samples. Since in a nonlinear structural equation model (in contrast to a linear structural equation model) E[Y | do(X = x)] is a nonlinear function of the intervention value x, we compute the interventional expectation for several values x: typically, for the nine deciles d_1(X), ..., d_9(X) of X. To compare the estimation accuracy of the methods on a DAG D, we compute a relative squared error e(D) over the set L of all considered pairs (X, Y) (for details see below) and over all intervention values d_1(X), ..., d_9(X):

e(D) = |L|^{−1} Σ_{(X,Y)∈L} [ Σ_{ℓ=1}^{9} (Ê[Y | do(X = d_ℓ(X))] − E[Y | do(X = d_ℓ(X))])² / Σ_{ℓ=1}^{9} E[Y | do(X = d_ℓ(X))]² ].    (18)

Typically, we repeat every experiment on N = 50 or N = 100 random DAGs (described in Section 5.1) and record the relative error e(D) of all methods for each repetition.
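For one pair (X, Y), the inner term of (18) is just a normalized squared distance over the nine deciles; a one-line sketch (est and ref are length-9 vectors of estimated and reference effects, and averaging over all pairs in L gives e(D)):

```r
rel_sq_err <- function(est, ref) sum((est - ref)^2) / sum(ref^2)
```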
Data simulation
To simulate data we first fix a causal order π^0 of the variables, that is X_{π^0(1)} ≺ X_{π^0(2)} ≺ ··· ≺ X_{π^0(p)}, and include each of the p(p−1)/2 possible directed edges, independently of each other, with probability p_c. In the sparse setting we typically choose p_c = 2/(p−1), which yields an expected number of p edges in the resulting DAG. Based on the causal structure of the graph we then build the structural equation model. We simulate from the additive structural equation model (13), where every edge k → j in the DAG is associated with a nonlinear function f^0_{j,k} in the structural equation model. We use two function types: 1. edge functions f^0_{j,k} drawn from a Gaussian process with a Gaussian kernel with bandwidth one; 2. sigmoid-type edge functions of the form f^0_{j,k}(x) = a · b(x+c) / (1 + |b(x+c)|) with a ∼ Exp(4) + 1, b ∼ Unif([−2, −0.5] ∪ [0.5, 2]) and c ∼ Unif([−2, 2]).
All variables with empty parental set (root nodes in the DAG) follow a Gaussian distribution with mean zero and standard deviation uniformly distributed in the interval [1, √2]. To all remaining variables we add Gaussian noise with standard deviation uniformly distributed in [1/5, √2/5]. Note that both simulation settings correspond to the ones used by Bühlmann et al. (2014).
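The simulation design can be sketched in a few lines of R; the edge probability and the sigmoid specification follow the description above, while all names are illustrative:

```r
# Random DAG skeleton over ordered variables: each of the p(p-1)/2
# order-respecting edges is included with probability pc = 2/(p-1)
p  <- 10
pc <- 2 / (p - 1)
A  <- matrix(0, p, p)
A[upper.tri(A)] <- rbinom(p * (p - 1) / 2, 1, pc)

# Sigmoid-type edge function f(x) = a * b(x+c) / (1 + |b(x+c)|)
rsigmoid_edge <- function() {
  a  <- rexp(1, rate = 4) + 1
  b  <- sample(c(-1, 1), 1) * runif(1, min = 0.5, max = 2)
  cc <- runif(1, min = -2, max = 2)      # 'cc' avoids masking base::c
  function(x) a * b * (x + cc) / (1 + abs(b * (x + cc)))
}
```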
Estimation of causal effects with known graphs
In this section we compare the different methods in terms of estimation accuracy and CPU time consumption for known underlying DAGs D^0. To that end we generate random DAGs with p = 10 variables and simulate n = 500 samples of the joint distribution, applying the simulation procedure introduced in Section 5.1. We then select all index pairs (k, j) such that there exists a directed path from X_k to X_j, and estimate the causal effect E[X_j | do(X_k)] for all such k, j at the nine deciles of X_k.
The experiment is done for two different levels of sparsity, a sparse graph with an expected number of p edges and a non-sparse graph with an expected number of 4p edges. We record the relative squared error (18) and the CPU time consumption, both averaged over all index pairs, for N = 100 (N = 20 in the dense setting, respectively) different DAGs D 0 . The results are displayed in Figure 7 for the sigmoid-type edge functions and in Figure 8 for the Gaussian process-type edge functions.
The method based on the entire paths (Algorithm 2) yields the smallest errors, followed by the partially path-based method with short-cuts (Algorithm 3). S-mint and additive regression exhibit a slightly worse performance. This finding can be explained by the fact that the path-based methods benefit from the full (and correct) structural information of the DAG, whereas the S-mint and additive regression methods only use local information (cf. Section 3.3). For the monotone sigmoid-type function class, additive regression seems to be a very good approximation to the true causal effect even in dense settings. For both settings we observe that the boosting iterations in S-mint do not improve the additive approximation substantially. In terms of CPU time consumption, S-mint and additive regression outperform the path-based methods. Additive regression is particularly fast as it only requires the fit of one nonparametric additive regression of X_j versus X_k and X_{pa(k)}, whereas the path-based methods each require one nonparametric additive model fit for every node on all the traversed paths. As the set of paths in the partially path-based method is a subset of the one in the entire path-based method (cf. Section 3.2 and Figure 1), the partially path-based method needs fewer model fits, which explains the reduction in time consumption. In particular, both S-mint and additive regression are computationally feasible for computing E[X_j | do(X_k)] for all pairs (k, j), even when p is large and in the thousands, assuming that the cardinality of the corresponding adjustment sets is reasonably small.
Estimation of causal effects on perturbed graphs
In the previous section we demonstrated that the two path-based methods exhibit a better performance than S-mint and the additive regression approximation if causal effects are estimated based on the underlying true DAG D^0. We will now focus on the more realistic situation in which we are only provided with a partially correct DAG D̂. We model this by constructing a set of modified DAGs {D̂_{h_r}}_{r∈K} with pre-specified (fixed) structural Hamming distances {h_r}_{r∈K} to the true DAG D^0, where K = {1, 2, ..., 6} and the corresponding {h_r}_{r∈K} are described in Figures 9 and 10. To do so, we use the following rule: starting from D^0 with p = 50 nodes, for each r ∈ K, we randomly remove and add h_r/2 edges each to obtain D̂_{h_r}. The structural Hamming distance between D^0 and the perturbed graph D̂_{h_r} is then equal to h_r, and a fraction 1 − h_r/(2|E|) of the edges in D̂_{h_r} is still correct, where |E| denotes the number of edges in the DAG D^0. Note that this modification may change the order of the variables (especially for large values of h_r).
We randomly choose 20 = |L| index pairs (k, j) such that there exists a directed path from X_k to X_j in D^0, but now consider the problem of estimating the total causal effect E[X_j | do(X_k)] based on the perturbed graph D̂_{h_r} for the adjustment sets or the paths, respectively (and based on sample size n = 500 as in Section 5.2). For every r ∈ K, this is repeated N = 100 times and in each repetition we record the relative squared error e(D) in (18). As before, we distinguish between a sparse graph with an expected number of 50 edges and a non-sparse graph with an expected number of 200 edges, and we use both simulation settings described in Section 5.1 for generating the edge functions f^0. The results are shown in Figures 9 and 10.
For both the sparse and the non-sparse setting, one observes that the larger the structural Hamming distance (or equivalently, the smaller the percentage of correctly specified edges of D^0), the better the performance of S-mint and additive regression in comparison with the path-based methods. That is, both methods are substantially more robust with respect to possible misspecifications of edges in the graph. This may be explained by the different degrees of localness (cf. Section 3.3) of the respective methods. For the two local methods we can hope to have approximately correct information in the parental set of X_k even if the modified DAG is far away from the true DAG D^0 in terms of the structural Hamming distance. For the path-based methods, however, randomly removing edges may break one or several of the traversed paths, which results in causal information being partially or fully lost. This effect is most evident in the two sparse settings. A similar behavior is also observed in Figure 11.
Note that, except for the true DAG D^0, the performance of the partially path-based method is at least as good as that of the entire path-based method. The short-cut introduced in Algorithm 3 seems not only to yield computational savings but also to improve (relative to the full path-based Algorithm 2) the statistical estimation accuracy of causal effects in incorrect DAGs. Again, a possible explanation for this observation is that the partially path-based method acts more locally and thus is less affected by edge perturbations.
Estimation of causal effects in estimated graphs
We now turn our attention to the case where the goal is to compute causal effects on a DAGD that has been estimated by a structure learning algorithm (while still relying on a correct model specification). In conjunction with S-mint regression, this is then the method est S-mint described in Section 3.5. We generate N = 50 random DAGs with p = 20 nodes for different numbers n of observational data, which are simulated according to the procedure in Section 5.1.
Using the knowledge that the structural equation model is additive, we apply the recently proposed CAM-algorithm (Bühlmann et al., 2014) for estimation of the true underlying DAG D^0 (which is identifiable from the observational distribution). The CAM-algorithm involves the following three steps: 1. preliminary neighborhood selection to restrict the number of potential parents per node (set to a maximum of 10 by default); 2. estimation of the correct order by greedy search (we use 6 basis functions per parent to fit the generalized additive model); 3. optional pruning of the DAG by feature selection to keep only the significant edges (on level α = 0.001 by default). After having estimated a DAG D̂ with the above procedure, we randomly select 10 = |L| index pairs (k, j) such that there exists a directed path from X_k to X_j in the true DAG D^0 and approximate the total causal effect E[X_j | do(X_k)] based on the estimated graph D̂. Figure 11 displays the relative squared errors as defined in (18).
All four methods show a similar performance with respect to the relative squared error on the DAGs obtained by applying the CAM-algorithm without feature selection. These DAGs mainly represent the causal order of the variables but are otherwise densely connected. An incorrectly specified order of the variables (e.g. for small sample sizes n) seems to affect S-mint and additive regression with parental sets and the path-based methods to a comparable extent. If the sample size increases, the estimated graph D̂ is closer to the true graph D^0, which improves the estimation accuracy of causal effects for all four methods.
The two path-based methods approximate the causal effects more accurately on the DAGs that are obtained without feature selection; that is, pruning the DAG does not seem to be advantageous for the estimation accuracy of causal effects, at least for a small number of observations. However, the pruning step yields vast computational savings for the two path-based methods, as demonstrated in Figure 12. The S-mint regression is very fast in both settings, and pruning the DAG before estimating the causal effects has only a minor effect on its time consumption and estimation accuracy.

Figure 11: Relative squared errors (18), for different numbers of observations n, computed on graphs that have been estimated using the CAM-algorithm, applied without the pruning step (left) and with the pruning step (right). We use the estimated parental sets as adjustment sets and the number of variables is p = 20. The S-mint regression corresponds to est S-mint as described in Section 3.5. Sigmoid-type additive structural equation models.

Figure 12: CPU time performance for n = 500 for N = 50 graphs of p = 20 variables that have been estimated using the CAM-algorithm with and without the pruning step. Pruning the DAG yields vast computational savings for the two path-based methods. S-mint and additive regression are barely affected by the pruning step and are considerably faster than the two path-based methods in both scenarios.

Summary of the empirical results, and the advantage of the proposed two-stage est S-mint method

With respect to statistical accuracy, measured with the relative squared error as in (18), we find that S-mint and additive regression are substantially more robust against incorrectness of the true underlying DAG (or against a wrong order of the variables) and against model misspecification, in comparison to the alternative path-based methods. The latter robustness of S-mint is rigorously backed up by our presented theory in Theorem 1 and Corollary 1, whereas the former seems to be due to the higher degree of localness as described in Section 3.3. Therefore, the proposed two-stage est S-mint (Section 3.5), where we first estimate the order of the variables or the structure of the DAG (or the Markov equivalence class of DAGs) and subsequently perform S-mint, is expected in general to lead to reasonably accurate results (as empirically quantified above for some settings). Only when the DAG is perfectly known and the model correctly specified (here by an additive structural equation model), which seems a rather unrealistic assumption for practical applications, were the path-based methods found to have a slight advantage. Thus, we recover here a typical robustness phenomenon against model misspecification of our nonparametric and more "model-free" S-mint regression procedure. Regarding computational efficiency, S-mint and in particular also the additive regression approximation are massively faster than the path-based procedures, making them feasible for larger scales where the number of variables is in the thousands.
Real data application
In this section we provide two examples of the application of our methodology to real data. We use gene expression data from the isoprenoid biosynthesis in Arabidopsis thaliana (Wille et al., 2004). The data consists of n = 118 gene expression measurements from p = 39 genes. In the original work, the authors try to infer connections between the individual genes in the network using Gaussian graphical modeling. Our goal is to find the strongest causal connections between the individual genes. We do not standardize the original data but adjust the bandwidths in S-mint by scaling with the standard deviations of the corresponding variables.
Estimation and error control for causal connections between and within the pathways
We first turn our attention to the whole isoprenoid biosynthesis data set and want to find the causal effects within and between the different pathways, with error control for false positive selections. To be able to compute the causal effects we have to estimate a causal network; for this we use the CAM-algorithm with its default settings. We then apply the S-mint procedure with parental sets obtained from the estimated DAG (which corresponds to the est S-mint procedure from Section 3.5) to rank the total causal effects according to their strength. We define the relative causal strength CS^rel_{k→j} of an intervention X_j | do(X_k) as the sum of distances between the observational and interventional expectations over different intervention values, divided by the range of the intervention values, i.e.

CS^rel_{k→j} = Σ_{ℓ=1}^{9} |Ê[X_j | do(X_k = d_ℓ(X_k))] − Ê[X_j]| / R_k(d),

where we choose d_1(X_k), ..., d_9(X_k) to be the nine deciles of X_k and denote their range by R_k(d) = d_9(X_k) − d_1(X_k).
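In code, the relative causal strength for one pair reads as follows (eff, obs_mean and d are illustrative names):

```r
# eff: length-9 vector of interventional expectations at the deciles d of X_k
# obs_mean: observational expectation of X_j; d: the nine deciles of X_k
cs_rel <- function(eff, obs_mean, d) {
  sum(abs(eff - obs_mean)) / (d[9] - d[1])
}
```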
To control the number of false positives (i.e. falsely selected strong causal effects) we use stability selection (Meinshausen and Bühlmann, 2010), which provides (conservative) error control under a so-called (and uncheckable) exchangeability condition. We randomly select 100 subsamples of size n/2 = 59 and repeat the procedure above 100 times. For each run, we record the indices of the top 30 ranked causal strengths. At the end we keep all index pairs that have been selected at least 66 times in the 100 runs, as this leads to an expected number of falsely selected edges (false positives) of less than or equal to 2 (Meinshausen and Bühlmann, 2010). The graphical representation of the network in Figure 13 is based on Wille et al. (2004). The dotted arcs represent the underlying metabolic network (known from biology); the six red solid arcs correspond to the stable index pairs found by est S-mint with stability selection. None of the stable edges are opposite to the causal direction of the metabolic network. In particular, there seems to be a strong total causal effect between the GGPPS variables in the MEP pathway, the MVA pathway and the mitochondrion. Note that in this section we rely heavily on model assumption (13), as the CAM-algorithm for estimating a DAG assumes additivity in the parents. Therefore we cannot fully exploit the advantage of the S-mint method that it works for arbitrary non-additive models (5) (but we would hope to be somewhat less sensitive to model misspecification than with path-based methods; see for example Figures 9 and 10).
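The subsampling-based error control can be sketched as follows; causal_strength_matrix() is a hypothetical helper returning the matrix of relative causal strengths for all ordered pairs, and all names are illustrative:

```r
sel_count <- matrix(0, p, p)
for (run in 1:100) {
  idx <- sample(n, floor(n / 2))                 # subsample of size n/2
  cs  <- causal_strength_matrix(Xobs[idx, ])     # CS_rel for all ordered pairs
  top <- order(cs, decreasing = TRUE)[1:30]      # linear indices of the top 30
  sel_count[top] <- sel_count[top] + 1
}
stable_pairs <- which(sel_count >= 66, arr.ind = TRUE)  # expected false positives <= 2
```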
Estimation and error control of strong causal connections within the MEP pathway
We now want to present a possible way of exploiting the very general model assumptions of S-mint. If the underlying order and an approximate graph structure are known a priori, we can use this information to proceed with S-mint using the order information as described in Corollary 1. This relieves us from any model assumptions on the functional connections between two variables (e.g. linearity, additivity, etc.).
To give an example, let us focus on the genes in the MEP pathway (black box in Figure 13). The goal is to find the strongest total causal effects within this pathway. The metabolic network (dotted arcs) provides us with an order of the variables which we use for S-mint regression as follows: we choose the adjustment set S(j_X) in (11) by going three levels back (p_max = 3) in the causal order (to achieve a reasonably sized set); for example, the adjustment set for CMK is DXPS1, DXPS2, DXPS3, DXR, MCT, whereas the adjustment set for GPPS is HDS, HDR, IPPI1. We cannot use the full set of all ancestors because there are only n/2 = 59 data points to fit the nonparametric additive regression and marginal integration, as we again use stability selection based on subsampling for controlling false positive selections, as described in the previous section. For each of the 100 subsampling runs we record the top 10 ranked index pairs and keep the ones that are selected at least 65 times out of 100 repetitions. This results in an expected number of false positives of less than 1 (Meinshausen and Bühlmann, 2010). The stable edges are shown in Figure 14. One of the four edges corresponds to an edge in the metabolic pathway. We find that the upper part of the pathway contains the strongest total causal effects, and it might be an interesting target for intervention experiments.
Conclusions
We considered the problem of inferring expected values of intervention distributions from observational data. A first main result (Theorem 1 and Corollary 1) says that if we know the local parental variables or a superset thereof (e.g., from the order of the variables), there is no need to base estimation and computations on a causal graph as we can directly infer the expected values of single-intervention distributions via marginal integration: we call the procedure S-mint. This result holds for any nonlinear and non-additive structural equation model apart from some mild smoothness and regularity conditions. Thus, from another point of view, S-mint estimation of expected values of single intervention distributions is robust against model misspecification of the functional form of the structural equations.
We complement the robustness view-point by empirical results indicating that S-mint also works reasonably well when the DAG-or order-structure is misspecified to a certain extent, as will be the case when we estimate these quantities from data; in fact, S-mint regression is substantially more robust than methods which follow all directed paths in the DAG to infer causal effects. This suggests that the two-stage est S-mint procedure is most reliable for causal inference from observational data: first estimate the DAG-or order-structure (or equivalence classes thereof) and second, subsequently pursue S-mint regression.
In addition, such a procedure is computationally much faster than methods which exploit directed paths in (estimated) DAGs.
"Mathematics"
] |
Improved PCR based methods for detecting C9orf72 hexanucleotide repeat expansions
Due to the GC-rich, repetitive nature of C9orf72 hexanucleotide repeat expansions, PCR based detection methods are challenging. Several limitations of PCR have been reported, and overcoming these could help to define the pathogenic range. There is also a need to develop improved repeat-primed PCR assays which allow detection even in the presence of genomic variation around the repeat region. We have optimised PCR conditions for the C9orf72 hexanucleotide repeat expansion, using betaine as a co-solvent and specific cycling conditions, including slow ramping and a high denaturation temperature. We have developed a flanking assay, and repeat-primed PCR assays for both 3′ and 5′ ends of the repeat expansion, which when used together provide a robust strategy for detecting the presence or absence of expansions greater than ~100 repeats, even in the presence of genomic variability at the 3′ end of the repeat. Using our assays, we have detected repeat expansions in 47/442 Scottish ALS patients. Furthermore, we recommend the combined use of these assays in a clinical diagnostic setting.
The threshold size range of pathogenic alleles has not been well defined, and often relies on the technical cut-off of detection by PCR based assays (30–50 repeats) [2,4]. There is one report of a stable 70 repeat allele in an unaffected individual expanding in his offspring, but further studies are required to determine whether anticipation is associated with this repeat expansion [5]. To ascertain the minimal pathogenic repeat size, it is necessary to detect and accurately measure repeat sizes in small expansion carriers.
Historically, Southern blotting has been regarded as the gold standard method for detecting and sizing large repeat expansions such as in Fragile X syndrome. However, improvements in PCR based methods, particularly repeat-primed (RP-) PCR [6], have meant that clinical diagnosis can now be made using PCR methods alone. RP-PCR uses a locus-specific flanking primer along with a paired repeat primer that amplifies from multiple sites within the repeat, generating a characteristic ladder of fragments after capillary electrophoresis. In C9orf72, somatic mosaicism for repeat length in blood samples has been reported, and this can make accurate interpretation of Southern blots challenging, as well as making it difficult to predict any genotype-phenotype correlations with varying repeat size [1,3,7,8]. For this reason, developing reliable and robust RP-PCR methods is important, and others agree that Southern blot results should be interpreted in conjunction with RP-PCR [3].
Within both research and diagnostic settings, it is desirable to have high-throughput, rapid PCR based tests which are highly accurate and do not require large amounts of input DNA. The challenges of PCR amplification of the 100% GC-rich C9orf72 HRE have been highlighted by a blinded international study which showed wide variability in the results obtained by different research laboratories using PCR methods [9]. Furthermore, the presence of variable deletions and insertions at the 3′ end of the HRE [10] can adversely affect the reliability of PCR assays targeting this region [11].
There are various ways in which PCR can be enhanced, such as the addition of co-solvents such as dimethyl sulfoxide and betaine, the use of modified Taq polymerase, and alteration of cycling conditions [12]. Heat-pulse extension (HPE) PCR has been reported to successfully allow amplification of repetitive GC-rich sequences similar to the C9orf72 HRE, and so in this study we used these cycling conditions as a starting point for optimisation for these amplicons [12].
The objectives of this study were to develop a conventional flanking PCR assay which could amplify repeat alleles beyond the 50-70 repeat limit reported in the literature, and to optimise RP-PCR assays for both ends of the repeat to ascertain whether greater than 100 repeats were present. We also wanted to overcome the issue with the Renton et al. assay, in which the expansion is not detected by RP-PCR in cases with genomic variability adjacent to the HRE [11]. These assays were then used to screen for the C9orf72 HRE in ALS patients from the Scottish population.
Patients and DNA samples
442 consecutive DNA samples were analysed, obtained from patients with ALS who donated blood for research to the Scottish Regenerative Neurology Tissue Bank and were phenotyped as part of the Scottish Motor Neurone Disease (MND) Register (between 1989 and 2015). The diagnostic criteria used by the Scottish MND Register were the Modified World Federation of Neurology criteria (1989-1994) or the 'El Escorial' criteria (1995 onwards) [13,14]. Clinical diagnostic samples received by the South East Scotland Genetics Service for C9orf72 testing from 2013 to 2016 were also used for assay development. In addition, positive control DNA samples derived from lymphoblast cell lines were obtained from the Coriell Cell Repositories. The Institute of Neurology (UCL, Queen Square, London) shared positive control DNA, derived from blood, from two short expansion (60-120 repeats) carriers.
Ethics
Ethical approval for research analysis of the Scottish Regenerative Neurology Tissue Bank samples affiliated to the Scottish MND register was obtained from the East of Scotland Research Ethics Service. NHS clinical diagnostic samples were consented for assay development.
Molecular testing
DNA was extracted from whole blood samples by phenol-chloroform extraction, manual salting out, the Nucleon BACC3 genomic DNA kit (Tepnel Life Sciences), or the Chemagic DNA blood kit (Perkin Elmer). PCR primers are listed in Table 1. PCR amplification was carried out on a Veriti thermal cycler (Life Technologies). Cycling conditions are shown in Table 2.
PCR products were separated by capillary electrophoresis using an ABI 3130xL with a 50 cm array (Life Technologies) with either GeneScan™ LIZ600 or LIZ1200 size standard (Life Technologies). Data was analysed using GeneMarker® software v2.4.0 (Soft Genetics). Alternatively, PCR products were separated on 0.8% UltraPure agarose (ThermoFisher Scientific) gels in TBE buffer with a 100 bp DNA ladder (Promega) and a 1 kb DNA extension ladder (Invitrogen).
For Sanger sequencing, either flanking PCR or an alternative 3′ RP-PCR was used (Table 1). PCR products were purified using Agencourt AMPure XP (Beckman Coulter), as per the manufacturer's instructions, using a Biomek® NX robot (Beckman Coulter). Sequencing was then performed using the R6 primer and BigDye® Terminator v3.1 (Life Technologies). Agencourt CleanSEQ (Beckman Coulter) was used, according to the manufacturer's instructions, to clean up sequencing products prior to capillary electrophoresis on an ABI 3130xL (Life Technologies). Data was analysed using Mutation Surveyor® software v4.0.8 (Soft Genetics).
C9orf72 HRE frequency in the Scottish ALS population
We tested 442 archival DNA samples from the Scottish Regenerative Neurology Tissue Bank, linked to the Scottish MND Register, collected from 1989 to 2015, using flanking PCR to assess the sizes of normal alleles. 157 cases which gave a homozygous result on this assay were then tested using both 3′ RP-PCR and 5′ RP-PCR, which led to detection of C9orf72 expansions in 47 patients (10.6%), and gave one equivocal result which could not be resolved due to insufficient DNA. The repeat sizes obtained are shown in Fig. 1, which shows a similar distribution to the UK population [3].
Optimal conditions for flanking PCR
We developed a PCR assay using primers flanking the C9orf72 HRE and applied the HPE PCR conditions developed for Fragile X syndrome [12]. HPE PCR involves multiple heat pulses during the extension phase of the cycling protocol to temporarily destabilize GC-rich structures which may otherwise lead to replication stalling [12]. These conditions permitted superior amplification to that achieved with the Qiagen Multiplex PCR kit or the Roche FastStart High Fidelity PCR system with standard cycling conditions (data not shown). We then varied cycling conditions to determine the annealing temperature, whether high denaturation, slow ramping or heat-pulse extension were required, and the optimal extension time. We found that the slow ramp from the annealing to the extension phase and the high denaturation temperature were the most important features, and in this case the heat pulses during extension were of no benefit (data not shown). The optimised conditions gave relatively balanced amplification of normal alleles, as highlighted in the series of samples with alleles ranging between 2 and 26 repeats (Fig. 2a-c). The Institute of Neurology, UCL, Queen Square, London sent us two samples with 'short' expansions. The first was estimated as having 60 repeats, with mosaicism for a large expansion (James Polke, personal communication), and the other as having 90 repeats in blood, estimated by Southern blotting [15]. These alleles had not been amplified using existing PCR methods by ourselves or the Institute of Neurology (data not shown). Using our method, we could detect alleles of approximately 70 and 80 repeats, and revealed a high level of mosaicism in both cases (Fig. 2d-f). The largest repeat size we detected in blood was ~120 repeats, although we did note that a large smear was present in a number of samples with expansions (data not shown). To determine the upper size range of detection, we tested lymphoblast cell line DNA from the Coriell Cell Repository which was positive for the C9orf72 HRE by RP-PCR. This revealed that material up to 5.7 kb, corresponding to approximately 900 repeats, had been amplified (Fig. 2f). There was amplification of expanded material in 4 out of 7 lines tested, and we presume that the other lines contained expansions beyond the size limit of detection by this method. This is supported by previously published Southern blotting results for ND10966, ND11836 and ND14442 [16].
To calibrate our sizing assay, we sequenced 14 patient samples with normal sized alleles to correlate the fragment size to repeat length. However, we cannot exclude variation in flanking sequences affecting the reported allele size, as has been reported by others [9].
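As a rough illustration of how a flanking-PCR fragment size maps to a repeat number (the 6 bp hexanucleotide unit comes from the repeat itself; the flanking length below is an invented placeholder, not a value from this paper):

```python
# Hypothetical size-to-repeat conversion for a flanking PCR assay.
# FLANK_BP stands in for the combined non-repeat sequence between the
# primers and is an assumed placeholder for illustration only.
FLANK_BP = 300

def fragment_to_repeats(fragment_bp: float, flank_bp: int = FLANK_BP) -> float:
    return (fragment_bp - flank_bp) / 6.0  # 6 bp per GGGGCC repeat unit

# With these assumptions, a ~5.7 kb amplicon corresponds to ~900 repeats:
print(round(fragment_to_repeats(5700)))  # -> 900
```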
Optimal conditions for RP-PCR
We designed primers for RP-PCR assays from both the 3′ and 5′ ends of the HRE. We compared different PCR cycling conditions and found that the optimal conditions were the same as for flanking PCR, but with annealing at 62 °C.
For the 5′ RP-PCR and 3′ RP-PCR assays, the maximum lengths of the amplicons obtained corresponded to 100 and 160 repeats, respectively. When including heat pulses [12] in the extension phase of the 3′ RP-PCR assay, we incidentally observed a lack of amplification of normal alleles in the presence of an expansion, and used this assay to specifically sequence the expanded allele. This allowed us to investigate whether the optimised RP-PCR conditions permitted amplification even in cases with variability at the 3′ end of the repeat, which has previously been reported to hamper PCR [11]. Of the 47 patients who tested positive for the expansion, we found that 31 had sequence which matched the reference sequence. This left a further 16 (34%) with some form of insertion or deletion present at the 3′ end of the repeat, as shown in Fig. 3. A potential limitation of RP-PCR is preferential amplification of normal sized products preventing amplification of large expansions; to investigate this issue, we performed admixture experiments for both the 3′ RP-PCR and 5′ RP-PCR assays. Dilution of a heterozygous expanded carrier in a heterozygous normal control (2 and 5 repeats) showed that both assays could still detect an expansion even when present at only 1%, as shown in Fig. 4.
For 156/157 homozygous normal patients tested, results for the 3′ and 5′ RP-PCR assays were concordant. The one discordant sample was apparently homozygous for 15 repeats on flanking PCR, and only showed an expansion using the 3′ RP-PCR assay. Further analysis of heterozygous samples in which one of the normal alleles was 15 repeats or longer revealed that the 3′ RP-PCR assay does not drop to baseline after the larger normal allele peak, unlike the 5′ RP-PCR assay, as shown in Fig. 5. This effect is more pronounced the larger the normal allele is, as the stuttering is more likely to extend into the affected range. Attempts to reduce this effect by altering annealing temperature, reducing polymerase concentration and reducing cycle number failed to completely eliminate this PCR artefact, as they also resulted in an undesirably weaker trace for the expanded allele.
Discussion
We have developed robust PCR based methods for detecting the HRE in C9orf72. The inclusion of betaine, along with a Taq polymerase optimised for long, GC-rich regions and slow-ramping PCR cycling, all contribute to efficient PCR of this challenging genomic region. We have used our PCR methods to screen a cohort of 442 Scottish ALS patients for the HRE, as well as in a clinical diagnostic setting for patients with ALS and FTLD.
The flanking PCR allows detection of alleles larger than have previously been reported using similar methods. Although the largest alleles were detected in cell line DNA, which is not a source routinely used in a diagnostic setting, this gives an indication that the PCR is efficient and will be informative for blood samples with stable expansions of similar size. As much C9orf72 research is performed on cell lines, this technique could be used to monitor repeat stability in culture. Detection of repeats in the 70-120 repeat range by PCR and capillary electrophoresis allows a more accurate size to be assigned than agarose gel electrophoresis or Southern blotting, which have lower resolution and are also non-denaturing, and so more affected by secondary structure formation [17].
For clinical diagnostic testing, it is important to be aware of the common repeat sizes within the population, as this can guide testing. Suspicion arises when a patient is apparently homozygous for a rare repeat size, particularly in the 15-30 range, which could hamper PCR amplification [11]. In our experience, sequencing of the flanking PCR products in apparently homozygous cases can also reveal normal alleles with genomic variability, which has also been reported by others [9].
The RP-PCR assays that we have developed appear to be higher yielding and produce a ladder of fragments corresponding to over 100 repeats, which is longer than previously published [1,2]. Importantly, our 3′ RP-PCR assay has been shown to be robust even in the presence of a number of genomic variations next to the HRE. We detected a relatively higher degree of variation than has been reported in studies based on southern UK populations, with a 10 bp deletion reported commonly in Northern England [11]. The prevalence of C9orf72 expansions in the Scottish ALS population is similar to that reported in other population-based ALS series internationally [3,18].
The admixture experiments, in which expansions could be detected even when diluted to 1% in a normal background, suggest that these assays are not severely affected by preferential amplification, and the conditions seem to be optimal for PCR of longer fragments. We observed that in the 3′ RP-PCR, normal alleles of greater than 15 repeats can lead to a PCR artefact with low-level expanded material being observed. This presumed primer-product or product-product interaction, leading to replication slippage, can only be reduced by measures which also reduce the production of expanded material in HRE-positive cases. Thus, this may be a limitation on product length for C9orf72 RP-PCR, as the 5′ RP-PCR does not generate as large products and does not suffer from this artefact. The prevalence of false positive results in the study by Akimoto et al. [9] suggests that laboratories should be aware of such test limitations.
Conclusion
We would recommend testing using all three PCR assays in a clinical diagnostic setting, and ensuring there are concordant results prior to reporting. There may be rare cases (1/450 in this study) that are homozygous normal on flanking PCR with an expansion apparent in only one RP-PCR assay; here, reflex testing with Southern blotting may be necessary to obtain a result. Both RP-PCR assays should be used together to minimise the risk of any rare genomic variability, including single nucleotide polymorphisms under primer binding sites, affecting test accuracy. This is in line with recommendations for other repeat expansion disorders, such as Myotonic Dystrophy type 1 [19]. | 3,749.2 | 2016-08-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Single-step fabrication of fibrous Si/Sn composite nanowire anodes by high-pressure He plasma sputtering for high-capacity Li-ion batteries
To realize high-capacity Si anodes for next-generation Li-ion batteries, Si/Sn nanowires were fabricated in a single-step procedure using He plasma sputtering at a high pressure of 100–500 mTorr without substrate heating. The Si/Sn nanowires consisted of an amorphous Si core and a crystalline Sn shell. Si/Sn composite nanowire films formed a spider-web-like network structure, a rod-like structure, or an aggregated structure of nanowires and nanoparticles depending on the conditions used in the plasma process. Anodes prepared with Si/Sn nanowire films with the spider-web-like network structure and the aggregated structure of nanowires and nanoparticles showed a high Li-storage capacity of 1219 and 977 mAh/g, respectively, for the initial 54 cycles at a C-rate of 0.01, and a capacity of 644 and 580 mAh/g, respectively, after 135 cycles at a C-rate of 0.1. The developed plasma sputtering process enabled us to form a binder-free high-capacity Si/Sn-nanowire anode via a simple single-step procedure.
Results and discussion
Effects of the sputtering-target material and discharge gas on the Si nanostructure. Figure 1a, b show surface and cross-sectional scanning electron microscopy (SEM) images of films deposited via 100 mTorr high-pressure Ar and He plasma sputtering with an intrinsic Si target and a SiSn target with a Sn content of 6 at%, respectively. The film morphology markedly differed depending on the discharge gas. Nanostructured grains are observed on the surface of the Si films deposited with Ar discharge gas (left image of Fig. 1a), and a dense film structure is observed in the cross-sectional SEM image. However, the Si nanoparticle film deposited using He as the discharge gas exhibited a much greater roughness; the average nanoparticle size was estimated to be 60 nm from more than 20 randomly selected particles in the SEM image. A porous structure with abundant pores was clearly observed in the cross-sectional SEM image of the nanoparticle film (right image of Fig. 1a). We evaluated the porosity of the deposited Si nanoparticle films. The mass density ρ of the deposited films was measured and subsequently compared with the bulk Si density of 2.32 g/cm^3. The porosity, calculated as ((2.32 − ρ)/2.32) × 100 (%), was as high as 17.6%. Here, a ρ of 1.91 g/cm^3 was estimated from the mass and volume of the deposited Si nanoparticle film; the film mass was derived from the difference in the substrate mass before and after the deposition process, and the film volume was estimated from the product of the deposition area and the film thickness obtained from the cross-sectional SEM image.
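As a small worked check of the porosity estimate above (densities taken from the text; the function is purely illustrative):

```python
# Porosity of the deposited film from its measured mass density, using the
# bulk Si density quoted in the text (2.32 g/cm^3).
BULK_SI_DENSITY = 2.32  # g/cm^3

def porosity_percent(film_density: float,
                     bulk_density: float = BULK_SI_DENSITY) -> float:
    return (bulk_density - film_density) / bulk_density * 100.0

# Film density of 1.91 g/cm^3, estimated from deposited mass / (area x thickness):
print(f"{porosity_percent(1.91):.1f}%")  # ~17.7% with rounded inputs;
                                         # the text reports 17.6%
```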
For the SiSn films in Fig. 1b, we observed a distinct transition from nanoparticle films to 1D nanowire films as a result of changing the discharge gas. Porous nanoparticle films were fabricated under Ar discharge gas, where the average nanoparticle size was as large as 202 nm and a cauliflower-like shape due to particle aggregation was observed. By comparison, in the film prepared under He discharge gas (right image of Fig. 1b), abundant Si nanowires longer than 10 μm were deposited in a wiggly and tangled form on the substrate in a single-step procedure, where the average diameter was 287 nm and the Sn content in the Si-nanowire films was 4.8%, as determined by SEM energy-dispersive X-ray spectroscopy (EDX) analysis. As the He-gas pressure was increased from 100 to 300 and 500 mTorr, the diameter of the Si nanowires decreased from 287 to 155 and 136 nm, respectively (Fig. 2a). The thinner Si nanowires exhibited a more fibrous form with a spider-web-like network. In the fibrous film deposited at 500 mTorr, numerous thinner wires were observed to branch from a main nanowire, and a small sphere was observed at the top of the thinner wires (see the magnified region of the 500 mTorr SEM image in Fig. 2a). Interestingly, we also observed that Si nanowires vertically align on the substrate, as evident in the 300-mTorr cross-sectional image (Fig. 2a).
We also reduced the target-substrate distance z from 20 to 10 mm. When the z distance was reduced to 10 mm, the wire morphology changed from the fibrous form to a rod-like form with a larger average diameter of 605 nm and a longer inter-rod distance of a few micrometers (Fig. 2b). The SEM image of the rod-like film surface shows a cauliflower-like pattern and indicates that the rods are composed of aggregated nanoparticles. On the basis of the morphological analysis by SEM, we identified three nanostructure patterns: (1) a porous nanoparticle film, (2) a fibrous 1D nanowire film with the spider-web-like network, and (3) a 1D rod-like film composed of nanoparticles. Thus, in the present study, we successfully fabricated 1D nanostructured films in a single step under the special plasma sputtering conditions of a SiSn (6 at%) target and high-pressure He discharge gas.
To clarify the detailed structure of the Si nanowires, we conducted TEM analysis. Figure 3a, b show TEM images of nanowires fabricated under He discharge-gas pressures of 100 and 300 mTorr, respectively. Elemental mapping images of wires were acquired by EDX, where blue and green indicate Si and Sn atoms, respectively. As evident in the TEM image of Fig. 3a (top), nanoparticles with sizes of 40-50 nm are arranged on the wire surface; the nanoparticles were shown by EDX analysis to be composed of Sn (green). Many Sn nanoparticles smaller than 40-50 nm were also observed on the Si core surface (blue), and the nanowire exhibited a core-shell structure consisting of a Si core covered with a Sn shell layer (middle image of Fig. 3a). We also carefully observed the top of the nanowire and found that the nanospheres were not concentrated at the tip but rather distributed randomly on the Si core surface (bottom image of Fig. 3a).
For the higher-pressure condition of 300 mTorr, thinner nanowires were found tangled with each other (top image of Fig. 3b). Interestingly, we observed that the thin nanowires had a neck structure composed of Si and Sn spherical parts and that the nanoparticles appeared to be arranged in a row, resembling beads (middle image of Fig. 3b). Similar to the nanowires formed under the 100 mTorr condition, nanowires formed under the 300 mTorr condition did not have nanospheres concentrated at their tips (bottom image of Fig. 3b).
Details of the crystal structure of the Si/Sn composite nanowires were obtained by X-ray diffraction (XRD) analysis. Numerous sharp signals were observed in the 2θ range from 25° to 70° in the patterns of both the 100- and 300-mTorr films (Fig. 3c); these peaks were assigned to the crystalline β-Sn phase, which exhibits high electrical conductivity. However, XRD peaks attributable to crystalline Si were not detected, and the Si core was identified as amorphous. It is reasonable that the crystallization of Si is difficult under our experimental conditions without a heated substrate because the melting point of Si (1410 °C) is much higher than that of Sn (231 °C). On the basis of the whole structure analysis, we concluded that the nanowires consist of an amorphous Si core with a high Li-ion capacity and a crystalline Sn shell with high electrical conductivity.
Growth mechanism of Si nanowires.
To understand the growth mechanism in the single-step nanowire fabrication process, which does not involve complicated preparation of metal catalyst nanodots, we conducted two additional experiments to evaluate (1) the effect of the deposition time and (2) the effect of the Sn content. First, Fig. 4 shows SEM surface images of films deposited for 5, 10, and 20 min. In the SEM image of the film deposited for 5 min, spherical particles with various sizes, which appear white in the image, were observed. At 10 min after the start of deposition, wiggly shaped wires had grown sparsely on the substrate, and the spherical particles appeared to be the starting points of nanowire growth via a common VLS mechanism. After 20 min of deposition, a fibrous nanowire film as thick as ~20 μm was observed. During initial particle formation on the film, the solubility of Sn in the Si material is low according to the binary phase diagram; Sn therefore precipitates as dots from the SiSn mixed film [43-46]. The particles could be in the droplet state in our high-pressure He sputtering system because the thermal conductivity of He gas is one order of magnitude higher than that of Ar gas. Under He, the heat of the sputtering target is therefore efficiently transferred to the reaction field, both in the gas plasma and on the substrate surface. To confirm this effect, we roughly measured the substrate surface temperature with a thermocouple probe, where the plasma and the neutral He gas directly flowed onto the small tip surface of the sensor placed on the substrate. The thermocouple temperatures rapidly increased with increasing deposition time until the deposition time reached 4 min (i.e., 254 °C under He discharge and 98 °C under Ar discharge gas) and then remained almost constant. At the shorter z distance of 20 mm under highly thermally conductive He gas, the temperature of the sensor tip increased, which we attributed to (1) efficient heat transfer from the sputtering target at an RF input power of 15.7 W/cm^2 (80 W) via the flowing He gas and (2) physical and chemical reactions involving impinging ions from the plasma on the surface [47]. The melting point of Sn is as low as 231 °C, and the increase in temperature to greater than 254 °C explains the Sn droplet formation from a thermal perspective. In addition, the melting point of the nanoparticles is expected to decrease with decreasing particle size [48]. The low melting point of the Sn or Sn/Si nanoparticles plays an important role in automatic VLS growth without substrate heating; the precipitation and droplet-dot formation of Sn are followed by Si wire growth via continuous irradiation with Si and Sn atoms.
In addition to the heating effect attributed to He gas, the Sn content also affects the single-step fabrication of the 1D nanostructures. Figure 5a, b show SEM images of films deposited using targets with Sn contents of 3 and 10 at%, respectively. In the films deposited using the 3 at% Sn target and 100 mTorr of He gas, nanowires are distributed with a very low density and clearly grow perpendicular to the substrate surface. This result is reasonable because a smaller amount of Sn atoms in the gas phase leads to a lower density of Sn catalyst dots on the substrate surface, and these dots are the trigger points of nanowire growth. Interestingly, we also found that nanowires clearly branch off and grow perpendicular to a main wire (see the magnified part of the 100 mTorr SEM image in Fig. 5a), which is evidence of secondary VLS growth on the Si-core surface. In our sputtering system, Si and Sn atoms are simultaneously supplied during the deposition process. As a result, the precipitation and droplet-dot formation of Sn, followed by Si-wire growth, occur repeatedly not only on the substrate surface but also on the grown-wire surface. Thus, the proposed plasma process enables the formation of a fibrous 1D nanowire film with a spider-web-like network by random repetitive VLS growth. As the He-gas pressure was increased to 300 mTorr at a low Sn content of 3 at%, the wire structure was no longer observed; thus, an appropriate combination of He-gas pressure and Sn content is important to transition the nanostructure morphology from 3D to 1D. By contrast, the nanowire shape markedly changed at the higher Sn content of 10 at% (Fig. 5b); nanowires with a larger diameter and shorter length were formed when the gas pressure was 100 mTorr, whereas aggregation of high-density nanowires and nanoparticles was observed when the gas pressure was 300 mTorr. Abundant Sn atoms in the gas phase led to the formation of Sn dots with a larger diameter or a higher density on the surface, which created the two unique wire structures at the extremes described above.
In a similar study involving Sn catalyst dots, Yu et al. reported the growth of Sn-catalyzed VLS Si nanowires via SiH4-plasma CVD, where the SnO2-film substrate was first treated with H2 plasma at 300 °C to form Sn nanodots and then Si atoms from SiH4 plasma were supplied to the Sn droplet nanodots on the substrate at 300-600 °C [49-53]. A major difference between our work and that of Yu et al. is the structure of the wires. In their two-step procedure, typical VLS growth resulted in Si nanowires with a vertically aligned structure with a Sn dot at the top. By contrast, in our single-step PVD procedure, random repetitive VLS growth results in Si/Sn composite nanowires with a spider-web-like network structure.
Performance of Li-ion batteries with Si/Sn composite nanowire anodes. We tested the performance of Li-ion batteries with Si/Sn composite nanowire anodes. Figure 6a-c show the gravimetric capacity of a Si/Sn-nanowire film with the spider-web-like network structure, the nanowire-nanoparticle aggregation structure, and the rod-like structure, respectively, as a function of the number of charge-discharge cycles. The gravimetric capacity was calculated by dividing the observed capacity (mAh) of the Li-ion battery by the mass (g) of the Si nanowire film as the active anode material. The charge-discharge current was varied from a 0.01-C to a 5-C rate at certain cycle intervals. Here, a 1-C rate refers to the test current required to fully charge the battery in 1 h; the test current at a 0.01-C rate for a high-capacity Si anode (4200 mAh/g) is approximately the same as that at a 0.1-C rate for a commercial graphite anode (372 mAh/g). First, under a 0.01-C rate for a Si anode (4200 mAh/g), a high gravimetric capacity greater than 1000 mAh/g was observed for all the Si-anode cells, and a Coulombic efficiency greater than 95% was achieved in the first 4-10 cycles. Here, the Coulombic efficiency was calculated as the ratio between the integrated discharge current and the integrated charging current at each cycle. The charge-discharge voltage profile and corresponding differential capacity (dQ/dV) curves were analyzed in detail for the first 10 cycles. As shown in the voltage profiles (middle graphs of Fig. 6a, b), a plateau region was observed at ~2.3 V in the first charge cycle. In general, an organic electrolyte reacts with the surface of a Si anode to form a solid electrolyte interphase (SEI) layer during the initial operation at high voltage [54]. In the Si/Sn composite anodes composed of the narrower 155- and 133-nm diameter nanowires (Fig. 6a, b, respectively), extensive decomposition of the electrolyte proceeds on the abundant wire-surface area for the initial SEI formation, which leads to a continuous high potential in the constant-current first charging process [54,55]. By comparison, no plateau region was observed in the profile for the Si/Sn composite anode with the larger 605-nm diameter nanowires (Fig. 6c); the total surface area of the Si nanowire anode and the related SEI formation process influence the first stable voltage profile without the plateau. To suppress the observed high-potential plateau phenomena, a systematic investigation of the optimal wire diameter and density and the optimal film thickness for stable SEI formation is needed.
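To make the C-rate arithmetic above concrete, here is a small sketch (the theoretical capacities of 4200 and 372 mAh/g are from the text; the active mass is an invented placeholder):

```python
# Test current implied by a C-rate: a 1-C current nominally fills the
# theoretical capacity in one hour, so I = C_rate * capacity * mass.
def test_current_mA(c_rate: float, capacity_mAh_per_g: float,
                    active_mass_g: float) -> float:
    return c_rate * capacity_mAh_per_g * active_mass_g

MASS_G = 0.001  # placeholder anode mass, not a value from the paper

# 0.01-C on a Si anode (4200 mAh/g) draws roughly the same current as
# 0.1-C on a graphite anode (372 mAh/g), as noted in the text:
print(test_current_mA(0.01, 4200, MASS_G))  # 0.042 mA
print(test_current_mA(0.10, 372, MASS_G))   # ~0.037 mA
```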
In the dQ/dV curves, distinct peaks appeared at 0.20-0.26 V during Li alloying, and at 0.27-0.30 V and 0.43-0.46 V during Li de-alloying in the first 10 cycles; the phase transformation between amorphous Si and LixSi was successfully repeated for all the Si nanowire anodes after the 10 cycles [55-58]. After 10 cycles, the battery cells with the spider-web-like network structure (a) and the wire-particle aggregation structure (b) showed very stable behavior, without capacity fading, for 54 cycles; the capacity retention was greater than 98%. For the cells with a rod-like structure (c), the highest capacity of 1700 mAh/g at cycle 5 decreased to 1174 mAh/g at cycle 54; the capacity retention was 68%. At 65 cycles, the charge-discharge current was increased tenfold (to 0.1-C) and the capacity decreased from 1193 to 1085 mAh/g for the spider-web-like network structure (a), from 973 to 646 mAh/g for the wire-particle aggregation structure (b), and from 1174 to 796 mAh/g for the rod-like structure (c). Finally, the battery cells with anodes prepared using the Si/Sn composite nanowires with the spider-web-like network structure (a), the wire-particle aggregation structure (b), and the rod-like structure (c) showed capacities of 644, 580, and 347 mAh/g after 199 cycles, respectively, and exhibited capacity retentions of 59%, 90%, and 43%, respectively, for 135 cycles at a 0.1-C rate. Notably, the capacity of 580-644 mAh/g observed for the cells with the spider-web-like network and aggregation structures was 1.5-1.7 times greater than that for a commercial graphite anode (372 mAh/g).
To summarize the performance of the Li-ion batteries, the narrower Si/Sn nanowire anodes with the spider-web-like network structure (average wire diameter: 155 nm) and the wire-particle aggregation structure (average wire diameter: 133 nm) (Fig. 6a, b, respectively) show stable behavior at a 0.01-C rate for the first 54 cycles. By comparison, the performance of the Si/Sn nanowire anode with a rod-like structure (average wire diameter: 605 nm) markedly decreased with increasing number of charge-discharge cycles. This result is reasonable because McDowell et al. have pointed out that fracture can occur during lithiation of crystalline Si spheres larger than ~150 nm in diameter and during lithiation of crystalline Si pillars larger than ~300 nm in diameter [59]. When cycled at 0.1-C for 135 cycles, the wire-particle aggregation structure (Fig. 6b) showed the most stable behavior among the three anode materials, with almost no capacity fade. In conclusion, the aggregation structure composed of the narrowest nanowires and smallest nanoparticles used in our experiments showed the most stable cycle performance, with a high capacity of 580 mAh/g at a rate of 0.1-C. However, when the C-rate was increased to 0.2, 0.5, 1, 2, and 5 (left graphs in Fig. 6a-c), the capacity decreased to 554, 336, 166, 36, and 1.54 mAh/g for the batteries with the spider-web-like network structure (Fig. 6a), to 488, 310, 132, 6.95, and 0.7 mAh/g for the batteries with the wire-particle aggregation structure (Fig. 6b), and to 260, 124, 28, 3, and 0 mAh/g for the batteries with the rod-like structure (Fig. 6c), respectively. Further improvements of the nanowire structure and film morphology are necessary to achieve rapid charging in next-generation high-capacity Li-ion batteries.
To investigate the mechanism of the relatively stable capacity behavior of the Si-nanowire anodes, we analyzed the anode materials after the cycling tests by disassembling the cells. Figure 7a shows SEM images of a Si/Sn composite nanowire anode with the spider-web-like network structure before and after 100 charge-discharge cycles. The average diameter of the nanowires increased from 165 to 474 nm after 100 cycles as a result of Li alloying. However, substantial pulverization of the Si nanowires was not observed, and the physical strain was successfully relaxed by the 2.8-fold radial expansion of the 1D Si nanowires. Figure 7b shows a TEM image and EDX elemental mapping images of wires after 100 charge-discharge cycles, where blue and green indicate Si and Sn atoms, respectively. A Sn layer still covered the Si wire core after 100 cycles. Notably, another layer was observed on the Si/Sn wire surface; this layer was composed of F atoms (red), P atoms (orange), and O atoms (pink). This layer is reasonably attributed to a chemical reaction between the Si/Sn nanowire surface and the organic electrolyte solution, which is composed of LiPF6 dissolved in a mixture of ethylene carbonate (C3H4O3) and diethyl carbonate (C5H10O3). The outer layer is considered part of the SEI layer, although the anode sample was observed by SEM and TEM under dry conditions.
We conducted SEM and TEM analyses of Si/Sn nanowires with a 165-nm diameter in the spider-web-like network structure after 100 cycles. We considered that the wire-particle aggregation structure (wire diameter: 133 nm) was not significantly pulverized because the two anode materials showed similar cycle behavior (Fig. 6a, b). However, the rod-like structure (wire diameter: 605 nm) might have been pulverized, as evidenced by its substantial decrease in capacity in the first 54 cycles (Fig. 6c), which is similar to the cycle behavior observed for an anode composed of two-dimensional Si thin films, which exhibited material fracture [60-62].
The fabricated amorphous Si-core/Sn-shell nanowires have a morphology suitable for Li-ion-battery anodes from mechanical and electrochemical perspectives. To summarize the advantages of the Si-core/Sn-shell nanowires, Fig. 8 shows a schematic of the lithiation of a Si/Sn nanowire anode. First, the amorphous structure of the Si core exhibits more favorable behavior than a crystalline structure when reacting with Li [59], because amorphous Si nanomaterials without orientation dependence are lithiated isotropically, which reduces the physical stress and suppresses material fracture [59,63,64]. Second, the Sn layer around the Si-core surface enhances the lithiation rate [39,65,66], because Sn has a high electrical conductivity of 10^4 S/cm, and the electrons critical for the electrochemical (Li-alloying) reactions are therefore supplied sufficiently to the Si core from the bottom Cu current collector via the Sn surface layer. Jiang et al. have reported that the electrical conductivity of amorphous SiSn films increases by six orders of magnitude when the Sn concentration in the films is increased from 0 to 10% [43]. We did not measure the electrical conductivity of the Si/Sn composite nanowire anodes. However, the Sn content in the Si/Sn composite nanowire anode was approximately 5-10 at%, which is expected to be sufficient to improve the electrical conductivity of the Si nanowire anode. Figure 9 shows Nyquist plots of Si/Sn composite anodes, as obtained by electrochemical impedance spectroscopy (EIS), before the charge-discharge cycle. These plots consist of a semicircle in the high-frequency region and a straight line in the low-frequency region. The semicircle in the high-frequency region corresponds to the charge-transfer resistance associated with interfacial Li+ ion transfer between the anode and the electrolyte. Among the three investigated anode materials, the rod-like structure (wire diameter: 605 nm) exhibited the lowest charge-transfer resistance before the charge-discharge cycle. This low resistance contributes to the different electrochemical behavior, which is observed as a lack of a plateau potential in the first charge-discharge voltage profile (Fig. 6c, middle graph). The straight line in the low-frequency region corresponds to the Warburg impedance, which is the impedance associated with ion diffusion. Similar slopes were obtained for the three anode materials, suggesting that ion diffusion within the Si/Sn composite nanowires did not substantially differ among the investigated anode materials [62,67]. In addition, we emphasize that the fibrous spider-web-like network structure of the Si/Sn nanowires is advantageous for a Li-ion-battery anode because, even if one part of a wire is broken, the conductivity of the electrode is preserved through other junctions, resulting in high capacity retention.
In conclusion, we fabricated high-capacity Si anodes with various nanostructures in a single-step procedure using plasma sputtering without substrate heating. The film morphology could be controlled from a 3D nanoporous to a 1D nanowire morphology by changing the discharge gas from Ar to He. In particular, 1D nanowire films were obtained under the special sputtering conditions of a SiSn (3-6 at%) target and He discharge gas at a high pressure of 100-500 mTorr. The 1D nanowires consisted of an amorphous Si core with high Li-ion capacity and a crystalline Sn shell with high electrical conductivity, which are suitable structures for Li-ion-battery anodes. The morphology of the Si/Sn nanowire films changed depending on plasma sputtering conditions such as the sputtering target-substrate distance z and the Sn content in the Si sputtering target, ranging from (1) a spider-web-like network structure with a wire diameter of 136-257 nm (z = 20 mm and Sn content = 6 at%) to (2) a rod-like structure with a rod diameter of 605 nm (z = 10 mm and Sn content = 6 at%) and (3) an aggregated structure of nanowires and nanoparticles (z = 20 mm and Sn content = 10 at%). We evaluated the charge-discharge cycle performance of the Si/Sn nanowire anodes in Li-ion battery cells. The Si/Sn nanowire anodes with the spider-web-like network structure and the nanowire-nanoparticle aggregation structure showed a high Li-storage capacity of 1219 and 977 mAh/g, respectively, for the initial 54 cycles at 0.01-C and finally 644 and 580 mAh/g, respectively, after 135 cycles at 0.1-C. The developed low-temperature plasma sputtering process enabled the formation of a binder-free high-capacity Si/Sn-nanowire anode in a single step.
Methods
Some of the following descriptions of the methods overlap with those in our previous papers [68,69]. The Si anode films were fabricated on a Cu disk using 13.56 MHz radiofrequency (rf) magnetron sputtering. The experimental setup was the same as described in our previous work (see Fig. 4 in Ref. [69]). The Cu disk had a diameter and thickness of 15 mm and 80 μm, respectively, and was placed at the center of the substrate holder. The sputtering target was a polycrystalline intrinsic Si disk (1 inch diameter) with a purity of 99.99% or a Si disk with a Sn content of 3, 6, or 10 at%. An rf power of 15.7 W/cm^2 (80 W) was supplied to the sputtering target for plasma production. Ar or He gas was supplied from the direction of the target to the substrate holder at a flow rate of 16-94 sccm. The gas pressure was set to a high value of 100 to 500 mTorr. The distance between the target and the substrate holder was 10 or 20 mm. The substrate holder was not heated or cooled during film deposition. Si films deposited onto the Cu disk under various conditions were assembled into Li-ion battery cells (HS flat cell, Hohsen) as anodes, with Li metal counter electrodes with diameters of 16 mm and thicknesses of 250 μm. A polypropylene separator with a diameter of 24 mm and a thickness of 24 μm was placed between the Si anode and the Li cathode. A solution of 1 mol/L LiPF6 dissolved in a mixture of ethylene carbonate (EC) and diethyl carbonate (DEC) (EC:DEC = 1:1 vol%) was used as the electrolyte in the battery test cells. The battery cycle performance was analyzed at a constant current of less than 1 mA, corresponding to a C-rate of 0.01 to 5, for all the charge-discharge cycles; the cut-off voltage was 0.03-2.0 V. The cycle tests were conducted at room temperature using a battery test system (HJ1001SD8, Hokuto Denko).
To analyze the material properties of the Si anodes, Si films were deposited onto n-type Si wafers with a low resistivity under the same sputtering conditions used for the battery Si anodes.The crystal structure was evaluated by XRD (Rigaku SmartLab), and the surface morphology and cross-sectional microstructure were analyzed by SEM (SU-8010, Hitachi) and TEM (JEM-ARM200F, JEOL).
Figure 1. Effects of sputtering-target material and discharge gas on Si nanostructured anodes. SEM surface and cross-sectional images of films deposited at a He or Ar gas pressure of 100 mTorr, where the distance between the sputtering target and substrate was 20 mm: (a) sputtering target of Si; (b) sputtering target of SiSn with a Sn content of 6 at%.
Figure 2. Effect of He gas pressure and sputtering-target/substrate distance on Si nanowire anodes. SEM surface and cross-sectional images of films deposited using a SiSn sputtering target with a Sn content of 6 at%. (a) He gas pressure of 300 and 500 mTorr; the distance between the sputtering target and substrate was 20 mm. (b) He gas pressure of 300 mTorr; the distance between the sputtering target and substrate was 10 mm.
Figure 3. Structure analysis of Si/Sn composite nanowires by TEM and EDX. Nanowires were fabricated using a SiSn sputtering target with a Sn content of 6 at% and a sputtering-target-to-substrate distance of 20 mm. (a) TEM images of nanowires and TEM-EDX mapping images of Si (blue) and Sn (green); the He gas pressure was 100 mTorr. (b) TEM images of nanowires and TEM-EDX mapping images of Si (blue) and Sn (green); the He gas pressure was 300 mTorr. (c) XRD patterns of Si/Sn nanowires fabricated at a He gas pressure of 100 and 300 mTorr.
Figure 4. Effect of deposition time on Si/Sn composite nanowire anodes. SEM surface images of films deposited for 5, 10, and 20 min. The distance between the sputtering target and substrate was 20 mm under the condition of a He gas pressure of 300 mTorr; the Sn content of the SiSn sputtering target was 6 at%.
Figure 6. Performance of Li-ion batteries with Si/Sn composite nanowire anodes. Cycle performance, charge-discharge curves, and corresponding dQ/dV curves for Li-ion batteries. (a) Anode of Si nanowires with a spider-web-like network structure. (b) Anode of Si nanowires with an aggregated structure of nanowires and nanoparticles. (c) Anode of Si nanowires with a rod-like structure.
Figure 7. Si/Sn composite nanowire anodes after charge-discharge cycles. (a) SEM surface images of the Si/Sn composite nanowire anode with a spider-web-like network structure before and after 100 cycles. (b) TEM image and TEM-EDX mapping images of the Si/Sn nanowire anode with a spider-web-like network structure after 100 cycles.
Figure 8. Advantages of the Si/Sn composite nanowire anodes. Schematic of the lithiated Si/Sn composite nanowire anode with a spider-web-like network structure.
Figure 9. Electrochemical characteristic of the Si/Sn composite nanowire anodes. Nyquist plots of Si/Sn composite anodes, as obtained by electrochemical impedance spectroscopy (EIS), before the charge-discharge process. | 6,741 | 2023-09-08T00:00:00.000 | [
"Materials Science"
] |
Long noncoding RNA LINC01132 enhances immunosuppression and therapy resistance via NRF1/DPP4 axis in hepatocellular carcinoma
Background Long noncoding RNAs (lncRNAs) are emerging as critical regulators of gene expression and play fundamental roles in various types of cancer. Current developments in transcriptome analyses have unveiled the existence of many lncRNAs; however, their functional characterization remains a challenge. Methods A bioinformatics screen was performed by integrating multiple omics data in hepatocellular carcinoma (HCC), prioritizing a novel oncogenic lncRNA, LINC01132. Expression of LINC01132 in HCC and control tissues was validated by qRT-PCR. Cell viability and migration activity were examined by MTT and transwell assays. Finally, our results were confirmed in an in vivo mouse model and in ex vivo patient-derived tumor xenograft experiments to determine the mechanism of action and explore LINC01132-targeted immunotherapy. Results Systematic investigation of lncRNA genome-wide expression patterns revealed LINC01132 as an oncogene in HCC. LINC01132 is significantly overexpressed in tumors and associated with poor overall survival of HCC patients, and its overexpression is mainly driven by copy number amplification. Functionally, LINC01132 overexpression promoted cell growth, proliferation, invasion and metastasis in vitro and in vivo. Mechanistically, LINC01132 acts as an oncogenic driver by physically interacting with NRF1 and enhancing the expression of DPP4. Notably, LINC01132 silencing triggers CD8+ T cell infiltration, and LINC01132 knockdown combined with anti-PDL1 treatment improves antitumor immunity, which may represent a new combination therapy in HCC. Conclusions LINC01132 functions as an oncogenic driver that induces HCC development via the NRF1/DPP4 axis. Silencing LINC01132 may enhance the efficacy of anti-PDL1 immunotherapy in HCC patients. Supplementary Information The online version contains supplementary material available at 10.1186/s13046-022-02478-z.
Background
Hepatocellular carcinoma (HCC) is a highly aggressive hepatic malignancy with a poor survival rate [1]. The most common therapy is surgical resection or liver transplantation [2]. However, patients are usually diagnosed at an advanced stage and are not suitable for surgical treatment. Therefore, systematic investigation of the underlying mechanisms associated with HCC development and progression is of high clinical significance and may lead to the development of novel clinical options [3].
With the development of high-throughput sequencing technologies, numerous molecular markers have been linked to the development of HCC. Genes from various signaling pathways are frequently mutated in HCC, such as the Wnt, P53, AKT and MAP kinase pathways [4]. Copy number alterations, such as MET and PEG10 amplification, have been found to be involved in the development of HCC [5,6]. Cancer-related mutations have been found to perturb the RNA regulatory network and have helped the identification of potential biomarkers [7]. However, the molecular pathogenesis of HCC is still not fully understood, and novel cancer-promoting genes must be identified and characterized.
Current progress in transcriptome analysis has revealed that a large portion of the human genome does not encode proteins [8]. Long noncoding RNAs (lncRNAs) have been discovered as a major type of regulatory RNA with important roles in cancer development [9]. Accumulating evidence has shown that lncRNAs are involved in a wide range of biological processes, acting as scaffolds or miRNA sponges [10,11]. For example, LINC01138 has been found to drive malignancy by activating arginine methyltransferase 5 in HCC [12] and to physically interact with the MYC protein, increasing its stability in cancer [13]; inflammation-induced LINC00665 is involved in NF-kB signaling activation in HCC [14]. Many lncRNAs have been identified as novel regulatory RNAs in HCC, but their biological functions and underlying mechanisms in pathogenesis remain largely unclear.
The crosstalk between the immune system and tumor cells is critical in cancer development and progression [15]. Following the success of immune checkpoint blocker (ICB) therapy in various cancer types, increasing efforts have been devoted to investigating novel ICB approaches in HCC patients [16]. Despite this breakthrough, a subset of patients remain poor candidates for immunotherapy due to lack of efficacy [17,18]. Therefore, numerous studies have focused on modifying the expression of noncoding RNAs in combination with immunotherapy to improve the response and overall survival. LIMIT is an immunogenic lncRNA in cancer immunity targetable for cancer immunotherapy [19], while NKILA, another lncRNA, can promote tumor immune evasion by sensitizing T cells to activation-induced cell death [20]. Furthermore, pan-cancer analysis of immune-related lncRNAs has prioritized cancer-related lncRNAs and enabled the identification of immune subtypes [21]. Therefore, exploring the roles of lncRNAs in immune regulation is essential to identify additional immunotherapy targets in cancer.
In this study, we identified a new oncogenic long intergenic noncoding RNA (lincRNA) in HCC, LINC01132. LINC01132 is overexpressed in HCC and significantly associated with malignant clinical features and poor clinical outcomes. Mechanistically, LINC01132 promotes cell growth, proliferation, invasion and metastasis through the LINC01132/NRF1/DPP4 axis. Finally, combinatory therapy targeting LINC01132 inhibition and anti-PDL1 blockade synergistically improves antitumor immunity in in vivo and ex vivo HCC models. In summary, our data suggest that LINC01132 is a potential biomarker and therapeutic target for HCC.
Patients and ethical statement
In total, 121 HCC tumor samples and corresponding adjacent normal liver tissues were obtained from the surgical specimen archives of Zhongshan Hospital [12], Shanghai, China. The patients were informed and signed consent forms acknowledging the use of their resected tissues for research purposes; this use was previously approved [12].
Cell culture
The HCC cell lines HepG2 and Hep1-6 were purchased from the American Type Culture Collection (ATCC, Manassas, Virginia, USA) and cultured following the recommended guidelines. These cells were characterized by Genewiz Inc. and cultured in Dulbecco's Modified Eagle's Medium (DMEM) (Thermo Fisher Scientific, California, USA) with 10% fetal bovine serum (FBS) and antibiotics. Huh-7 cells were purchased from the JCRB cell bank (Tokyo, Japan) and cultured in Roswell Park Memorial Institute (RPMI) 1640 Medium (Thermo Fisher Scientific) with 10% newborn calf serum. Hep3B and SNU-449 cells were purchased from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China) and maintained in Dulbecco's Modified Eagle Medium (DMEM, Invitrogen, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (FBS, HyClone, Logan, UT, USA), 1% penicillin/streptomycin (pen/strep, Invitrogen), and 8 mg/L of the antibiotic tylosin tartrate against mycoplasma (Sigma-Aldrich, St. Louis, Missouri, USA), at 37 °C in 5% CO2 (v/v). All cell lines were authenticated by autosomal STR profiling, thawed afresh every 2 months, and tested for mycoplasma. None of the cell lines used was found in the database of commonly misidentified cell lines maintained by the International Cell Line Authentication Committee.
In vivo mouse models
All studies were supervised and approved by the Shanghai University of Traditional Chinese Medicine Institutional Animal Care and Use Committee (IACUC). Female mice were used to model liver cancer. Power analysis indicated that n = 5 mice per group was sufficient to identify the expected effects with 90% confidence.
RNA quantification
Total RNA was extracted from liver samples or from HepG2 or Huh-7 cells with TRIzol Reagent (Thermo Fisher Scientific, California, USA). Quantitative real-time PCR (qPCR) was performed with an iQ5 machine and SYBR Premix Ex Taq II (TaKaRa Bio, Tokyo, Japan). Data were normalized to β-actin or to an IgG control (for the RNA pull-down and RIP assays). The relative genomic level in tumor tissues was compared with that in normal liver tissue. Primers used for RT-qPCR and RT-PCR are described in Additional file 1: Table S1.
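For reference, a minimal sketch of relative quantification assuming the standard 2^(-ΔΔCt) convention, which the text does not spell out; all Ct values below are invented placeholders:

```python
# Relative qPCR expression via the 2^(-ddCt) method (an assumed convention;
# the text only states normalization to beta-actin or IgG controls).
def relative_expression(ct_target_tumor: float, ct_ref_tumor: float,
                        ct_target_normal: float, ct_ref_normal: float) -> float:
    d_ct_tumor = ct_target_tumor - ct_ref_tumor    # normalize to reference gene
    d_ct_normal = ct_target_normal - ct_ref_normal
    dd_ct = d_ct_tumor - d_ct_normal               # tumor relative to normal
    return 2 ** (-dd_ct)

# A target amplifying two cycles earlier (relative to reference) in tumor
# than in normal tissue corresponds to ~4-fold overexpression:
print(relative_expression(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```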
Cytoplasmic and nuclear RNA isolation
Cytoplasmic and nuclear RNA were extracted using Thermo Fisher BioReagents (Thermo Fisher Scientific) according to the manufacturer's instructions. qRT-PCR analysis was performed using SYBR Green Master Mix (Invitrogen, New York, USA) to assay the subcellular localization of LINC01132; β-actin and U6 were used as cytoplasmic and nuclear controls, respectively. Primers are listed in Additional file 1: Table S1.
In vitro cell proliferation and colony formation assays
Cell proliferation was measured using the Cell Counting Kit-8 (CCK-8) (Dojindo, Kyushu, Japan) at 1, 3, and 5 days after LINC01132 or mock infection. Cells were seeded into 96-well plates at a density of 10^3 cells/well, and 10 μl of CCK-8 was added to 90 μl of the cell culture medium per well. Cells were subsequently incubated at 37 °C for 2 hours and the optical density measured at 450 nm. For the colony formation assay, 1-1.5 × 10^3 cells/well were plated in a 6-well plate and incubated at 37 °C for 2 weeks. The colonies were fixed and stained with 0.1% crystal violet dye in 20% methanol, and the number of colonies was counted macroscopically. All assays were performed in triplicate.
In vitro migration and invasion assays
Migration assays were performed in a Transwell chemotaxis 24-well chamber (BD Biosciences, Franklin Lakes, NJ). Briefly, 2 × 10^4 cells were plated in the upper chamber with a non-coated membrane. For the invasion assays, 5 × 10^4 cells were placed into the upper chamber with a Matrigel-coated membrane. After 16 hours of incubation at 37 °C, migrating or invading cells were fixed and stained with 0.1% crystal violet dye in 20% methanol. Migrated or invaded cells were counted and imaged with an inverted microscope (Olympus, Tokyo, Japan).
In vivo assays for metastasis
For the in vivo metastasis assays, 2 × 10^6 Hep1-6 cells infected with pWPXL-LINC01132 or pWPXL-GFP were resuspended in 0.2 mL of serum-free DMEM and injected into the livers of C57BL/6 mice. After 40 days, the mice were humanely euthanized, and the liver, lungs and intestines were collected, fixed with phosphate-buffered neutral formalin and prepared for standard histological examination. The numbers of metastatic foci in liver tissue sections were counted by H&E staining under a binocular microscope (Leica, USA).
Construction of PDX mouse model of HCC
The PDX model of liver cancer was established from patients' liver cancer samples. These were first sectioned into small tissue pieces for subcutaneous tumor formation in nude mice. Two weeks later, the subcutaneous tumor-bearing tissue was excised, sectioned and transplanted again subcutaneously into nude mice. After subcutaneous tumor formation, shNC or shLINC01132 was injected subcutaneously, and tumor diameter and weight were measured every 2 days. Animals were humanely euthanized after 30 days, and tumor tissue was removed for terminal endpoint analysis.
RNA pull-down assays
LINC01132 was transcribed in vitro with biotin RNA labelling mix and T7 RNA polymerase according to the manufacturer's instructions (Invitrogen). In total, 40 μl of streptavidin-linked magnetic beads (Thermo Fisher Scientific) were used to pull down the biotinylated RNA at room temperature for 2 hours. The bead-RNA-protein complexes were then washed four times with 1× binding and washing buffer (5 mM Tris-HCl, 1 M NaCl, 0.5 mM EDTA, and 0.005% Tween 20). The proteins were precipitated and diluted in 60 μl of protein lysis buffer. Finally, the retrieved proteins were resolved on SDS-PAGE gels for mass spectrometry or Western blot. Western blotting in the RNA pull-down assay was performed with mouse anti-NRF1 and anti-KDM5B antibodies (Cell Signaling Technology, CST, Danvers, Massachusetts, USA, 1:500) and a mouse anti-β-actin antibody (CST, 1:1000). Antibody information is available in Additional file 1: Table S2.
RNA-seq and computational analysis
RNA-seq was performed by the Sequencing and Non-Coding RNA Program at RiboBio (Guangzhou, China) using a HiSeq 3000 (Illumina, USA). HISAT2 [22], StringTie [23] and Ballgown [24] were used to align the reads to the genome, generate raw counts corresponding to each known gene, and calculate RPKM (reads per kilobase of transcript per million mapped reads) values.
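For reference, the RPKM normalization mentioned above reduces to a one-line formula; the sketch below is a toy illustration with invented numbers, not the Ballgown implementation:

```python
# RPKM = raw_count * 1e9 / (total_mapped_reads * gene_length_bp)
def rpkm(raw_count: int, gene_length_bp: int, total_mapped_reads: int) -> float:
    return raw_count * 1e9 / (total_mapped_reads * gene_length_bp)

# e.g. 500 reads on a 2 kb transcript in a library of 20 million mapped reads:
print(rpkm(500, 2000, 20_000_000))  # -> 12.5
```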
Northern blot
LINC01132 levels were measured by northern blot using an Ambion NorthernMax-Gly Kit (Austin, TX, USA). Total RNA was electrophoresed and transferred to a positively charged nylon membrane, and the RNA was then fixed to the membrane by UV cross-linking. In brief, the cross-linked membrane was prehybridized with ULTRAhyb, and RNA was detected with a LINC01132-specific oligonucleotide probe (primers provided in Additional file 1: Table S1) labeled with digoxigenin-ddUTP using a DIG Oligonucleotide 3′-End Labeling Kit (Roche Diagnostics, Indianapolis, IN, USA) in roller bottles.
Western blot
Proteins were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a nitrocellulose membrane (Bio-Rad, Hercules, CA). Non-specific binding was blocked with 5% nonfat milk, and the membrane was subsequently incubated with the indicated primary antibodies, followed by horseradish peroxidase-conjugated secondary antibodies. Immunoreactivity was visualized with chemiluminescence ECL reagents (Pierce, Rockford, IL) and imaged with a ChemiDoc imaging system (12003153-s). Densitometry analysis was performed with Image-Pro Plus 6.0 (Media Cybernetics).
Mass spectrometry analysis
Specific bands were excised for proteomics screening by mass spectrometry analysis (Shanghai Applied Protein Technology, Shanghai, China). Protein identification was retrieved from the human RefSeq protein database (National Center for Biotechnology Information), using Mascot version 2.4.01 (Matrix Science, London, UK). The retrieved protein was detected by western blot.
RNA immunoprecipitation (RIP)
RIP experiments were performed using the Magna RIP™ RNA-Binding Protein Immunoprecipitation Kit (Millipore, Massachusetts, USA) according to the manufacturer's instructions. The co-precipitated RNAs were detected by reverse transcription PCR. Total RNA (input controls) and normal mouse IgG controls were assayed simultaneously to validate the specificity of the RNA association with NRF1 and KDM5B (n = 3 for each experiment). Gene-specific primers for LINC01132 are provided in additional file 1: Table S1.
Co-immunoprecipitation
Huh-7 and HepG2 cells infected with the lentivirus expressing LINC01132, or Huh-7 cells transfected with LINC01132 siRNA, were lysed with RIPA buffer (Beyotime Biotechnology) containing protease (Thermo Fisher Scientific Inc.) and RNase (Thermo Fisher Scientific Inc.) inhibitors, and then centrifuged at 16,400 g for 15 min. Supernatants were collected, and the amounts of NRF1 and DPP4 or KDM5B and DPP4 protein were examined by immunoblotting to normalize for DPP4 and NRF1 loading. Supernatants were then incubated with the indicated antibody-coated Protein G Dynabeads (Life Technologies) overnight at 4 °C with gentle rotation. The beads were washed five times with NT2 buffer (50 mM Tris-HCl [pH 7.4], 150 mM NaCl, 1 mM MgCl2, 0.05% Nonidet P-40) containing protease and RNase inhibitors, and then three times with PBS, again containing protease and RNase inhibitors. After washing, proteins were eluted, and the immuno-complexes were analyzed by immunoblotting (antibodies provided in additional file 1: Table S2).
Genome-wide expression of lncRNAs in HCC
Genome-wide expression of lncRNAs in HCC was collected from the Gene Expression Omnibus (GEO) and The Cancer Genome Atlas (TCGA). A gene expression profile of HCC was collected from GEO under accession number GSE104310, which provided the expression of tumors and paired non-tumor tissues collected at Sun Yat-sen University Cancer Center. In total, eight normal and 12 tumor samples were included in our analysis. Moreover, genome-wide expression data for liver hepatocellular carcinoma (LIHC) were obtained from the TCGA project [25], which included 374 tumor and 50 normal samples. RNA sequencing data of 21 hepatitis B virus-related HCC patients, covering non-neoplastic liver and tumor tissues, were obtained from GSE94660 [26].
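As a side note, series such as GSE104310 can be fetched programmatically; the following sketch uses the GEOparse package, with the cache directory and the printed metadata fields chosen purely for illustration (the original analysis does not specify how the data were downloaded).

```python
import GEOparse  # pip install GEOparse

# Download (or load from cache) the HCC series cited above.
gse = GEOparse.get_GEO(geo="GSE104310", destdir="./geo_cache")

# List the samples in the series together with their titles.
for name, gsm in gse.gsms.items():
    print(name, gsm.metadata.get("title"))
```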
Identification of differentially expressed lncRNAs in HCC
Wilcoxon's rank sum test was used to identify differentially expressed lncRNAs in HCC tumor compared to normal tissues. Only long intergenic non-coding RNAs (lincRNAs) were considered in our analyses. We identified the lincRNAs with fold changes > 2 or < 0.5 and adjusted p-values < 0.05 as differentially expressed lincRNAs.
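A minimal sketch of this filtering step is given below. The expression matrices, the pseudo-count, and the use of mean expression for the fold change are assumptions for illustration; the text only specifies the test, the fold-change cutoffs, and the adjusted p-value threshold.

```python
import numpy as np
from scipy.stats import ranksums
from statsmodels.stats.multitest import multipletests

def differential_lincRNAs(tumor, normal, gene_names):
    """Per-gene Wilcoxon rank-sum test (tumor vs. normal) with
    Benjamini-Hochberg adjustment, then the fold-change (> 2 or < 0.5)
    and adjusted p < 0.05 filter described in the text."""
    pvals, fold_changes = [], []
    for i in range(tumor.shape[0]):
        _, p = ranksums(tumor[i], normal[i])
        pvals.append(p)
        # Fold change of mean expression; the pseudo-count avoids division by zero.
        fold_changes.append((tumor[i].mean() + 1e-9) / (normal[i].mean() + 1e-9))
    adjusted = multipletests(pvals, method="fdr_bh")[1]
    return [(g, fc, q) for g, fc, q in zip(gene_names, fold_changes, adjusted)
            if (fc > 2 or fc < 0.5) and q < 0.05]
```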
Predicting the functions of lncRNAs
Spearman correlation coefficients (SCC) between lncRNA and protein-coding gene expression were used to predict the function of the lncRNA. All genes were ranked based on their SCC and subjected to Gene Set Enrichment Analysis (GSEA) [27]. The Biocarta pathway dataset from MSigDB was used for this analysis [28]. Pathways with a normalized enrichment score > 1.96 or < −1.96 and a false discovery rate (FDR) < 0.01 were identified.
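The correlation-ranking step can be sketched as follows; the variable names are illustrative, and the actual enrichment step (Biocarta gene sets from MSigDB with the NES and FDR thresholds stated above) would be run on the resulting pre-ranked list with a dedicated GSEA tool.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_genes_by_scc(lnc_expr, coding_expr, gene_names):
    """Spearman correlation of each protein-coding gene with the lncRNA
    across samples; genes are sorted by SCC to form a pre-ranked list."""
    sccs = np.array([spearmanr(lnc_expr, coding_expr[i]).correlation
                     for i in range(coding_expr.shape[0])])
    order = np.argsort(sccs)[::-1]  # descending correlation
    return [(gene_names[i], sccs[i]) for i in order]
```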
Transcription regulation analysis
To identify the transcription factors (TFs) that bind to the promoter region of a gene (e.g., DPP4), we queried the ChIPBase database [29]. TFs that can bind within the region from 5 kb upstream to 1 kb downstream of transcription start sites were identified.
Integrative analyses identify onco-lncRNA LINC01132 in HCC
To identify potential oncogenic lncRNAs in HCC, we first analyzed the genome-wide expression profile of 12 tumors and 8 adjacent non-tumor tissues. In total, we identified 22 up-regulated long intergenic noncoding RNAs (lincRNAs) and 35 down-regulated lincRNAs in HCC (Fig. 1A and additional file 1: Table S3). Numerous known tumor-associated lincRNAs were identified, such as PVT1 [30], HIF1A-AS1 [31] and MIAT [32]. Moreover, the expression of these lincRNAs could effectively distinguish tumor tissues from normal controls in HCC (Fig. 1B).
Next, we investigated the genetic alterations of the differentially expressed lincRNAs. The majority of highly expressed lincRNAs were associated with copy number amplification in HCC (Fig. 1C). For example, approximately 11% of HCC patients had PVT1 CNV amplification, which might induce the expression of PVT1 in HCC. In contrast, a smaller number of patients had copy number deletions for the down-regulated lincRNAs (Additional file 2: Fig. S1A). We then focused on the lincRNA LINC01132, which exhibited the second highest copy number amplification frequency (7%) in HCC (Fig. 1C) and was highly expressed in HCC tissue. Moreover, LINC01132 exhibited copy number amplification across various cancer types (Additional file 2: Fig. S1B).
To further validate the oncogenic role of LINC01132, we analyzed its expression in two additional HCC cohorts. We found that LINC01132 exhibited significantly higher expression in tumor tissues (Additional file 2: Fig. S1C-D). We also evaluated the expression of LINC01132 in 121 paired tumor tissues and corresponding noncancerous tissues (NCT), showing that LINC01132 was highly expressed in cancer (Fig. 1D, p < 0.0001). Patients with more than one cancer embolus (Fig. 1E, p = 0.0032) or a larger tumor size (Fig. 1F, p = 0.0321) had even higher LINC01132 expression levels. Finally, we explored the effects of LINC01132 on prognosis in HCC patients and observed that high LINC01132 expression was associated with poor survival (Fig. 1G, p = 0.0143). Altogether, these results suggested that LINC01132 plays an oncogenic role in HCC.
LINC01132 promotes HCC cell growth and metastasis in vitro
To investigate the possible roles of LINC01132 in HCC pathogenesis, we first explored its expression in HCC cell lines. We found that LINC01132 was expressed at relatively high levels in HepG2 cells but not in Huh-7 cells (Additional file 2: Fig. S2A). We also observed that LINC01132 was present in both the cytoplasm and the nucleus (Additional file 2: Fig. S2B), suggesting that it may play regulatory roles in both cellular compartments. Moreover, we found that LINC01132 was not translated into protein but functioned as an RNA (Additional file 2: Fig. S2C-D). We next knocked down or overexpressed LINC01132 in HCC cell lines and performed colony formation and cell proliferation assays. The knockdown and overexpression constructs effectively altered LINC01132 expression in the cell lines (Additional file 2: Fig. S2E). LINC01132 knockdown significantly decreased colony formation (Fig. 2A, p < 0.001), while overexpression of LINC01132 significantly increased colony formation (Fig. 2B, p < 0.001). Moreover, knockdown of LINC01132 significantly decreased cell proliferation (Fig. 2C, p < 0.001), while LINC01132 overexpression significantly promoted cancer cell proliferation (Fig. 2D, p < 0.001).
We next explored the roles of LINC01132 in cell invasion and migration. LINC01132 knockdown significantly decreased cell migration and invasion (Fig. 2E-F, p-values < 0.001). In contrast, overexpression of LINC01132 significantly increased cell migration and invasion (Fig. 2G, p-values < 0.001). In addition, we validated the biological roles of LINC01132 in another two cell lines (Hep3B and SNU-449), and the results were consistent across the different cell lines (Additional file 2: Fig. S3). Taken together, these results demonstrated that LINC01132 significantly promotes in vitro cell growth, proliferation, migration and invasion in HCC.
LINC01132 promotes HCC cell growth and metastasis in vivo
To confirm the effects of LINC01132 on the tumorigenicity of HCC, LINC01132-knockdown cells and control cells were subcutaneously injected into immunodeficient nude mice. Tumor xenografts derived from LINC01132-knockdown cells exhibited smaller volumes and lower weights than those from empty vector-transduced cells (Fig. 3A-C). In addition, we assessed the effects of LINC01132 on metastasis, observing that the numbers of metastatic nodules were significantly increased in the LINC01132-overexpressing groups (Fig. 3D, p = 0.0092).
The patient-derived xenograft (PDX) mouse model has been shown to recapitulate multiple characteristics of the biological context of human cancer. Therefore, we next silenced LINC01132 in the PDX mouse model 7 days post-implantation (Fig. 3E). LINC01132 knockdown significantly inhibited cancer growth, as shown by decreased tumor volumes and weights (Fig. 3E, p = 0.0028 and 0.039). Because the transfection efficiency of the plasmid vector was low, an adenovirus vector was constructed to validate the function of LINC01132. We found that silencing of LINC01132 significantly decreased tumor volumes and weights (Additional file 2: Fig. S4). Together, these results indicated that elevated LINC01132 expression may contribute to the development and progression of HCC.
LINC01132 potentially regulates DPP4 in HCC
To elucidate the potential molecular mechanisms of LINC01132 in HCC, we calculated the correlation between protein expression and LINC01132 expression in the TCGA HCC cohort [33]. LINC01132 expression was significantly positively correlated with SETD2, PRDX1 and CD26, while negatively correlated with AKT and BRD4 (Fig. 4A). Deficiency of the histone methyltransferase SET domain-containing 2 (SETD2) in the liver has been demonstrated to lead to abnormal lipid metabolism and HCC [34]. Similarly, PRDX1 can act as a pro-cancer protein in HCC HepG2 cells [35]. The expression of LINC01132 was consistently correlated with CD26 protein and RNA expression in HCC (Fig. 4B). CD26/dipeptidyl peptidase (DPP) 4 is a membrane-bound protein found in many cell types and has been suggested as a potential biomarker and target for cancer therapy [36].
Next, we performed GSEA based on the expression correlation between LINC01132 and all protein-coding genes and found that LINC01132 expression was positively correlated with the EIF2 and ETC pathways, while negatively correlated with the TCRA and NKT pathways (Fig. 4C). The EIF2 pathway has been associated with various types of cancer, including HCC [37,38]. The observed negative correlation with immune-related pathways suggested that LINC01132 may be involved in suppressing immune activity. Moreover, we knocked down LINC01132 in two cell lines and performed RNA-seq for genome-wide expression profiling. Potentially correlated genes such as PRDX1, DPP4 (CD26) and AKT2 exhibited significant expression changes in LINC01132-knockdown cells (Fig. 4D). GSEA revealed that genes highly expressed in LINC01132-knockdown cells were significantly involved in the COMP and STEM pathways (Fig. 4E-F, false discovery rates = 0.030 and < 0.001). In particular, we found that LINC01132 knockdown increased the expression of numerous immune-related genes, such as CD8A, IL6 and IL11 (Fig. 4F). We further investigated the expression levels of DPP4 in LINC01132-knockdown or LINC01132-overexpressing HCC cells. DPP4 transcript and protein levels were increased in LINC01132-overexpressing cells and decreased in LINC01132-knockdown cells (Fig. 4G-H). These results suggest that LINC01132 might exert its oncogenic functions by modulating the DPP4 signaling pathway.
LINC01132 interferes with NRF1 binding to DPP4 in HCC
To further explore the molecular mechanism underlying the oncogenic activity of LINC01132 in HCC, we performed RNA pull-down assays to identify the proteins associated with LINC01132. The LINC01132 pull-down identified 598 potential interacting proteins (Fig. 5A). Of these proteins, two transcription factors (NRF1 and KDM5B) and one transcription cofactor (CDK8) have binding sites around the transcription start site (TSS) of the DPP4 gene (Fig. 5A). Moreover, we confirmed that sense but not antisense LINC01132 was specifically associated with NRF1 and KDM5B (Fig. 5B-C). RIP assays further showed that NRF1 and KDM5B antibodies could significantly enrich LINC01132, whereas the GAPDH antibody and IgG control could not (Fig. 5D-E). The expression of NRF1 and KDM5B was then investigated, showing that higher-grade HCC patients exhibited significantly higher expression of NRF1 (Additional file 2: Fig. S5A-B). Survival analysis revealed poor survival for patients with higher expression of NRF1, KDM5B, LINC01132 and DPP4 (log-rank p = 0.038, Additional file 2: Fig. S5C).
Furthermore, we retrieved public chromatin immunoprecipitation (ChIP) sequencing data from ChIPBase [29] and found that NRF1 and KDM5B can bind to the promoter region of DPP4 across cell lines (Fig. 5F and Additional file 2: Fig. S6). Additionally, LINC01132 knockdown significantly blocked the interaction between NRF1 and DPP4 (Fig. 5G). However, this effect was not observed for KDM5B. These results indicated that LINC01132 acts as a scaffold in the interaction between NRF1 and DPP4.
LINC01132 improves the response to anti-PD1 immunotherapy in HCC
Previous studies have shown that DPP4 decreases chemokines and other immune molecules [39]. Our data revealed that LINC01132 overexpression was significantly associated with decreased immune pathway activity. We also identified a negative relationship between DPP4 and CD8+ T cell infiltration levels in HCC tissues (Fig. 6A). Combination therapy with anti-PDL1 blockade immunotherapy and other therapies has been shown to improve the efficacy of the tumor-specific T-cell response [40,41]. Based on the above results, we predicted that LINC01132 knockdown could enhance lymphocyte trafficking and improve tumor responses to PDL1 blockade in HCC. Thus, we investigated the combination therapy of LINC01132 knockdown and a PDL1 inhibitor in the Hep1-6-shLINC01132 tumor model (Fig. 6B).
We found that shLINC01132 therapy resulted in delayed tumor growth and smaller tumor volume and weight (Fig. 6B-D). Importantly, more pronounced tumor regression was observed in the group treated with shLINC01132 plus anti-PDL1 (Fig. 6B-D). The number of CD8+ T cells was significantly increased in the tumors of mice treated with shLINC01132 compared with control (Fig. 6E, p = 0.012). Moreover, the increase was even greater in the tumors of mice treated with shLINC01132 plus PDL1 blockade (Fig. 6E, p = 0.0098). Thus, the above results further indicated that the LINC01132/NRF1/DPP4 axis is involved in the immunosuppression of HCC (Fig. 6F) and suggested that LINC01132 knockdown could improve the efficacy of PDL1 blockade immunotherapy.
Discussion
Rapid progress in high-throughput sequencing technologies has enabled the identification of a large number of lncRNAs. Moreover, emerging evidence has indicated that lncRNAs are expressed in cell type-specific or tissue-specific patterns [42,43], suggesting important roles in diverse biological processes. Expression perturbation of lncRNAs has been observed in various cancer types. By integrative analysis of genome-wide lncRNA expression and genetic alterations, we revealed that LINC01132 is up-regulated in HCC tissues and that its high expression might be driven by copy number amplification. Moreover, overexpression of LINC01132 was associated with poor prognosis in HCC patients.
LINC01132 promoted cell growth, proliferation, invasion and metastasis in vitro and in vivo. Functional analysis revealed that LINC01132 physically interacted with NRF1 and KDM5B and promoted the expression of DPP4. To date, expression perturbation of LINC01132 has only been associated with oncogenic activities in ovarian cancer, through regulation of the miR-431-5p/SOX9 axis [44], and with hypoxia regulation in glioblastoma [45]. Therefore, the role of LINC01132 in the regulation of the NRF1/DPP4 axis in HCC is described here for the first time. These results suggest that the same lncRNA can perform diverse functions by regulating various pathways in different tumor contexts. CD26/DPP4 is a membrane-bound protein, and its higher expression has been found in a wide variety of tumor pathologies [36]. CD26 expression was significantly increased in HCC tumor specimens and was associated with larger tumor size [46]. CD26 has been proven to be a pro-oncogenic gene in HCC and a potential therapeutic target. We also found that DPP4 interacts with a number of cancer-related genes (Additional file 2: Fig. S7). Moreover, DPP4 inhibition improves the antitumor effect of anti-PD1 in HCC by enhancing CD8+ T cell infiltration [47]. It is increasingly clear that there are widespread changes in lncRNA expression during the immune response. Numerous lncRNAs, such as NEAT1, UCA1, MIR22HG, and LINK-A, have been implicated in immune regulation in cancer [48][49][50]. Here, we demonstrated that inhibition of LINC01132 can achieve similar antitumor effects. Altogether, we reveal the critical role of the LINC01132/NRF1/DPP4 axis in promoting the development of HCC.
Conclusions
In conclusion, our results demonstrated that LINC01132 may act as an onco-lncRNA whose overexpression promotes HCC development via the NRF1/DPP4 signaling axis. Furthermore, LINC01132 silencing may be a novel synergistic strategy to improve the efficacy of PDL1 inhibitor therapy in a subgroup of ICB-resistant HCC patients.
"Medicine",
"Biology"
] |
Model of phenotypic evolution in hermaphroditic populations
We consider an individual based model of phenotypic evolution in hermaphroditic populations which includes random and assortative mating of individuals. By increasing the number of individuals to infinity we obtain a nonlinear transport equation, which describes the evolution of phenotypic distribution. The main result of the paper is a theorem on asymptotic stability of trait distribution. This theorem is applied to models with the offspring trait distribution given by additive and multiplicative random perturbations of the parental mean trait.
Introduction
This paper studies the evolution of phenotypic traits in hermaphroditic populations, i.e. populations in which every individual has both male and female reproductive systems. A great part of these populations has developed various defense mechanisms against self-fertilization (autogamy) to guarantee genetic diversification (e.g. a proper shape of a flower can inhibit self-pollination in some species of plants). In that case, individuals can only mate with others to copulate and cross-fertilize. Nonetheless, self-fertilization occurs in some hermaphroditic species. Hermaphroditic populations are plentiful among both aquatic and terrestrial animals as well as plants. Some examples include sponges (Porifera), Turbellaria, Cestoda (Cestoidea), Lumbricidae, some mollusks such as the sea slug blue dragon (Glaucus atlanticus) and various kinds of land snails, and the majority of flowering plants (angiosperms).
A considerable amount of literature has been published on modelling asexual populations by means of a microscopic description of trait evolution. Macroscopic approximations of these models were derived in the form of deterministic processes or superprocesses (see Champagnat 2006; Champagnat et al. 2008; Fournier and Méléard 2004; Ferrière and Tran 2009; Méléard and Tran 2009). In this paper we formulate an individual-based model to describe the phenotypic evolution in hermaphroditic populations. We consider a large population of small individuals characterized by their traits. The traits are assumed to be unchanged during the lifetime, and examples include skin colour, the shape of a leaf and shell pattern. All individuals are capable of mating or self-fertilizing to give birth to offspring.
We consider a general model of mating, which includes both random and assortative mating. The first particular case is a semi-random mating model, based on the assumption that each individual has an initial capability of mating depending on its trait. This mating model is similar to models describing aggregation processes in phytoplankton dynamics (see Arino and Rudnicki 2004; Rudnicki and Wieczorek 2006a, b). The second particular case is an assortative mating model, in which individuals with similar traits mate more often than they would if they chose a partner randomly. We adapt a model based on a preference function (Doebeli et al. 2007; Gavrilets and Boake 1998; Matessi et al. 2001; Polechová and Barton 2005; Schneider and Bürger 2006; Schneider and Peischl 2011), usually used in two-sex populations, to hermaphrodites. The consequence of mating or self-fertilization is the birth of a new individual. The trait of this individual is given by a random variable that depends only on the traits of the parents. Each individual can die naturally or when competing with others. We consider a continuous-time model, and we assume that all the above-mentioned events happen randomly.
The model presented in this paper is a hermaphroditic analogue of the asexual model introduced by Bolker and Pacala (1997) and Law and Dieckmann (2002) and studied by Fournier and Méléard (2004). Despite the vast literature concerning individual-based models and their macroscopic approximations, only a few models involving mating processes have appeared so far (Collet et al. 2013; Remenik 2009).
One of our aims is to study a macroscopic deterministic approximation of the model. We obtain it by increasing the number of individuals in the population to infinity, with simultaneous decrease in the mass of each individual. After suitable scaling of parameters, the limit passage leads to an integro-differential equation. Solutions of the equation describe the evolution of trait distribution. We also study the existence and uniqueness of the solutions. We investigate extinction and persistence of the population and convergence of its size to some stable level.
The main aim of our paper is to prove the asymptotic stability of the trait distribution. The asymptotic behavior of the solutions is characterized by conservation of the mean phenotypic trait. We apply our main theorem to two specific models. In these models the offspring trait is the parental mean trait randomly perturbed by some external environmental effects or genetic mutations. In the first model, the noise is additive. The property of additivity allows us to derive a formula for the stationary phenotypic distribution. The second model contains multiplicative noise, and it includes, as a special case, the Tjon-Wu version of the classic Boltzmann equation (see Bobylev 1976; Krook and Wu 1977; Tjon and Wu 1979). The Tjon-Wu equation describes the distribution of the energy of particles. As a by-product of our investigation we give a simple proof of the theorem of Lasota and Traple (see Lasota 2002; Lasota and Traple 1999) concerning the asymptotic stability of this equation. Additionally, an example of trait reduction is given: in this case, over a long period of time, all traits reduce to a particular one, which is the mean trait of the initial population.
The scheme of the paper is as follows. In Sect. 2 we collect all assumptions on the dynamics of the population. In Sect. 3 we introduce a stochastic process corresponding to our individual-based model. Section 4 is devoted to the macroscopic approximation, the limiting equation and its solutions. In particular, we give results about the extinction and persistence of the population and the stabilization of its size. In Sect. 5 we formulate the results concerning asymptotic stability, and we give examples of their applications. Finally, in the last section we discuss problems for future investigation concerning assortative mating models.
Individual-based model
Let us fix a positive integer d. We assume that every individual is described by a phenotypic trait x, which belongs to some closed and convex subset F of R^d, whose interior is nonempty. The trait of an individual does not change during its lifetime.
Random mating
In sexually reproducing populations a mating process highly depends on a given species. We will consider both random and assortative mating. In classical genetics individuals mate randomly-the choice of partner is not influenced by the traits (panmixia). Random mating occurs often in plants, but it is also observed in some hermaphroditic animals (Baur 1992). We study a semi-random mating model in which the mating rate depends on the trait. An individual described by the trait x is capable of mating/self-fertilizing with rate p(x), where p is a positive function of the trait.
Consider a population which consists of n individuals with traits x_1, ..., x_n. Since two different individuals may have the same trait, it is useful to describe the state of the population as the multiset

x = {x_1, ..., x_n}.

We recall that a multiset (or bag) is a generalization of the notion of a set in which the members are allowed to appear more than once. We suppose that any individual can mate with an individual of trait x_j with probability

p(x_j) / Σ_{k=1}^{n} p(x_k).

Thus the mating rate of individuals with traits x_i and x_j is given by

m(x_i, x_j; x) = p(x_i) p(x_j) / Σ_{k=1}^{n} p(x_k).    (1)

The quantity m(x_i, x_i; x) is a self-fertilization rate. In the case of populations without self-fertilization we keep the form (1) for i ≠ j, with the self-terms removed from the denominator, and set m(x_i, x_i; x) = 0. Let us observe that in both cases the mating rate is a symmetric function of x_i and x_j, but only in the first case we have Σ_{j=1}^{n} m(x_i, x_j; x) = p(x_i). If we pass with the number of individuals to infinity and replace the discrete model by the infinitesimal model with trait distribution described by a continuous measure μ, then the mating rate in both cases is given by

m(x, y; μ) = p(x) p(y) / ∫_F p(z) μ(dz).
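A small numerical sketch of the semi-random mating rate (1), using the product form written above and arbitrary illustrative traits and capability function; the assertion checks the stated property that the rates of the individual with trait x_i sum to p(x_i).

```python
import numpy as np

def semi_random_mating_rates(traits, p):
    """Matrix of rates m(x_i, x_j; x) = p(x_i) p(x_j) / sum_k p(x_k)
    for a finite population with self-fertilization allowed."""
    px = p(traits)
    return np.outer(px, px) / px.sum()

traits = np.array([0.2, 0.5, 0.9])
p = lambda x: 1.0 + x                         # illustrative capability of mating
m = semi_random_mating_rates(traits, p)
assert np.allclose(m.sum(axis=1), p(traits))  # row sums equal p(x_i)
```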
Assortative mating
Now we consider models with assortative mating, i.e. when individuals with similar traits mate more often than they would if they chose a partner randomly. Assortative mating can be modelled in different ways. For example, one can use matching theory, according to which each participant ranks all the potential partners according to its preferences and attempts to pair with the highest-ranking one (Almeida and de Abreu 2003; Puebla et al. 2012). Such models are very interesting but difficult to analyze. The most popular models of assortative mating are based on the assumption that a random encounter between two individuals with traits x and y depends on a preference function a(x, y) (Doebeli et al. 2007; Gavrilets and Boake 1998; Matessi et al. 2001; Polechová and Barton 2005; Schneider and Bürger 2006; Schneider and Peischl 2011). We consider only the case when all the individuals have the same initial capability of mating, p(x) = 1.
Usually, it is assumed that a(x, y) = φ(‖x − y‖), where φ : [0, ∞) → [0, ∞) is a continuous and decreasing function. It means that if the population consists of n members with traits x_1, ..., x_n, then individuals of traits x_i, x_j mate with rate

m(x_i, x_j; x) = a(x_i, x_j) / Σ_{l=1}^{n} a(x_i, x_l).

Note that in general the function m is not symmetric in x_i and x_j, and usually it describes mating in two-sex populations. Then the first argument in m refers to a female. Females are assumed to mate only once, whereas males may participate in multiple matings. We have Σ_{j=1}^{n} m(x_i, x_j; x) = 1 for each i, which means that all females mate with the same rate. The mating rate in the infinitesimal model is of the form

m(x, y; μ) = a(x, y) / ∫_F a(x, z) μ(dz).

While considering hermaphroditic populations, one can expect a model with a symmetric mating rate. We obtain such a model assuming that the mating rate is of the form

m(x_i, x_j; x) = (1/2) a(x_i, x_j) [ 1 / Σ_{l=1}^{n} a(x_i, x_l) + 1 / Σ_{l=1}^{n} a(x_j, x_l) ],    (6)

where a(x, y) is a symmetric nonnegative preference function, e.g. a(x, y) = φ(‖x − y‖) (in the case of populations without self-fertilization we eliminate the terms with i = l and j = l from the denominators). The mating rate in the infinitesimal model is now of the form

m(x, y; μ) = (1/2) a(x, y) [ 1 / ∫_F a(x, z) μ(dz) + 1 / ∫_F a(y, z) μ(dz) ].

In the rest of the paper we will assume that the mating rate m(x_i, x_j; x) is of the form (1) or (6).
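The two-sex preference-based rate described above can be checked numerically; the sketch below normalizes a Gaussian preference row-wise and verifies the two properties stated in the text: every row sums to one, and the matrix is in general not symmetric. The preference function is an illustrative choice.

```python
import numpy as np

def preference_mating_rates(traits, phi):
    """Rates m(x_i, x_j; x) = a(x_i, x_j) / sum_l a(x_i, x_l)
    with preference a(x, y) = phi(|x - y|)."""
    a = phi(np.abs(traits[:, None] - traits[None, :]))
    return a / a.sum(axis=1, keepdims=True)

traits = np.linspace(0.0, 1.0, 5)
phi = lambda d: np.exp(-d**2 / 0.1)          # continuous, decreasing preference
m = preference_mating_rates(traits, phi)
assert np.allclose(m.sum(axis=1), 1.0)       # all "females" mate with rate 1
assert not np.allclose(m, m.T)               # not symmetric in general
```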
Birth of a new individual
After mating/self-fertilization an offspring is born with probability 1. The trait of the offspring is drawn from a distribution K(x_i, x_j, dz), where x_i and x_j are the parental traits. We suppose that for every x, y ∈ F the measure K(x, y, ·) is a Borel probability measure with support contained in the set F, and assume that there exist positive constants c_1, c_2, c_3 such that

∫_F |z| K(x, y, dz) ≤ c_1 + c_2 |x| + c_3 |y|,    (8)

and

∫_F z K(x, y, dz) = (x + y)/2.    (9)
Condition (9) has a simple biological interpretation: the expected offspring trait is the parental mean trait. Moreover, we suppose that for every x, y ∈ F and for every Borel set A ⊂ F we have K(x, y, A) = K(y, x, A), and the function (x, y) ↦ K(x, y, A) is measurable.
Competition and death rates
An individual from the population can die naturally or when competing with others. Let us denote by I(x_i) the rate of interaction of the individual with trait x_i. We assume that I is a nonnegative function. For individuals with traits x_i and x_j we define a competition kernel U(x_i, x_j), which is assumed to be a nonnegative and symmetric function. Competition always leads to the death of one of the competitors. We assume that the natural death rate of the individual with trait x_i is expressed by a number D(x_i), and suppose that D is a nonnegative function.
The dynamics of the population
We present the dynamics of the ecological system that we are interested in. The process starts at time t = 0 from an initial distribution. Individuals with traits x_i and x_j can mate at rate m(x_i, x_j; x) of the form (1) or (6). After mating, an offspring is born with probability one. An individual with trait x_i dies naturally at rate D(x_i) or by competition at rate I(x_i) Σ_j U(x_i, X_j(t)), where the sum extends over all living individuals at time t and X_j(t) are their traits. We assume that all the events (mating, natural death, competition) are independent.
The phase space
We denote by N the set of all positive integers, δ_x stands for the Dirac measure concentrated at a point x, and 1_A denotes the indicator function of a set A. We consider the space M(F) of all finite positive Borel measures on F equipped with the topology of weak convergence of measures. We introduce the set M ⊂ M(F) of the form

M = { Σ_{i=1}^{n} δ_{x_i} : n ∈ N, x_1, ..., x_n ∈ F }.

For any measure μ ∈ M(F) and any measurable function f we write ⟨μ, f⟩ = ∫_F f(x) μ(dx), and we write D([0, ∞), M) for the Skorokhod space of all càdlàg functions from the interval [0, ∞) to the set M (see for details, e.g., Ethier and Kurtz 1986; Skorokhod 1956).
Generator of the process
We consider a continuous-time M-valued stochastic process (ν_t)_{t≥0} whose infinitesimal generator L is defined for all bounded and measurable functions φ : M → R by a formula with two terms. The first term on the right-hand side describes the mating and birth processes with the dispersal of traits. The second term stands for the two kinds of death. The death part of the generator was previously studied in Fournier and Méléard (2004). Notice that, contrary to Fournier and Méléard (2004), the first term of the operator L is nonlinear with respect to ν.
We assume that there are positive constants a̲, ā, D̄, Ī, Ū such that for every x, y ∈ F

a̲ ≤ a(x, y) ≤ ā, D(x) ≤ D̄, I(x) ≤ Ī, U(x, y) ≤ Ū.    (14)

Under the above assumptions, if the initial measure ν_0 ∈ M satisfies E⟨ν_0, 1⟩^q < ∞ for some number q ≥ 1, then E sup_{0≤t≤T} ⟨ν_t, 1⟩^q < ∞ for any T < ∞. Consequently, the standard approach of Fournier and Méléard can be easily applied to prove the existence of the Markov process (ν_t)_{t≥0} with the infinitesimal generator given by formula (13) (see Fournier and Méléard 2004; Remenik 2009 for detailed proofs).
Macroscopic approximation
This section contains an approximation of the process which was introduced and studied in the previous sections. The idea is to normalize the initial model and pass with the number of individuals to infinity, assuming that the "mass" of each individual becomes negligible. In this approximation mating and death rates remain unchanged. Only the intensity of interaction is rescaled, and tends to 0 with an unbounded growth of the population. This approach leads to a deterministic nonlinear integro-differential equation whose solutions describe an evolution of trait distribution.
We consider a sequence of populations indexed by numbers N ∈ N. In the Nth population each individual carries mass 1/N and the intensity of interaction is rescaled by 1/N. The Nth population is described by a process (ν^N_t)_{t≥0} which is defined in the same way as the process (ν_t)_{t≥0} but with the corresponding rescaled coefficients. We define the rescaled process μ^N_t = ν^N_t / N. The generator L^N of the process (μ^N_t)_{t≥0} is defined for any measurable and bounded map φ : M_N → R.
Theorem 1 We assume that condition (14) holds and the functions a, p, k, D, I, U are continuous. We suppose that E⟨μ^N_0, 1⟩^q < ∞ for some q ≥ 2 and all N ∈ N, and that the sequence (μ^N_0)_{N∈N} converges weakly to a deterministic finite measure μ_0. Then the processes (μ^N_t)_{t≥0} converge, as N → ∞, to the deterministic measure-valued function (μ_t)_{t≥0} satisfying

(d/dt)⟨μ_t, f⟩ = ∫_F ∫_F m(x, y; μ_t) ( ∫_F f(z) K(x, y, dz) ) μ_t(dy) μ_t(dx) − ∫_F ( D(x) + I(x) ∫_F U(x, y) μ_t(dy) ) f(x) μ_t(dx)    (15)

for every bounded and measurable function f : F → R.
The standard proof of the above theorem is based on Ethier and Kurtz (1986), Corollary 8.16, Chapter 4; since mating is described by a Lipschitz continuous operator on the space of positive and finite Borel measures with the total variation norm, the proof can be directly adapted, for example, from Rudnicki and Wieczorek (2006a).
Strong solutions in the space of measures
According to Theorem 1, the solutions of (15) are continuous in the topology of weak convergence of measures. In this part we show a stronger result: they are also continuous in the total variation norm ‖ν‖_TV. Below we write equation (15) formally as an abstract evolution equation (16) in the space of positive, finite Borel measures M(F) on the set F with the total variation norm.
Theorem 2 Assume that the functions a, p, D, I, U and the function (11) are measurable, and condition (14) holds. Moreover, suppose that there exist positive constants p̲, p̄ such that p̲ ≤ p(x) ≤ p̄ for all x ∈ F. If μ_0 ∈ M(F), then there exists a unique solution μ_t, t ≥ 0, of Eq. (16) with the initial condition μ_0. The function t → μ_t is bounded and continuous in the norm ‖·‖_TV.
Proof Let us fix T ≥ 0 and δ > 0, and consider the space C_T of continuous measure-valued functions, where μ_T ∈ M(F) is some measure. Notice that from assumption (14) there are constants m̲, m̄, depending on p̲, p̄ in the case of semi-random mating and on a̲, ā in the case of assortative mating, which bound the mating rate for any y ∈ F and any measures μ, ν. Take functions μ_·, ν_· ∈ C_T from the ball B(0, 2‖μ_T‖_TV). Taking δ > 0 sufficiently small, the resulting estimates (20) and (21) show that the associated integral operator transforms the ball B(0, 2‖μ_T‖_TV) into itself and is Lipschitz continuous with some constant L < 1. By the Banach fixed point theorem the operator has a unique fixed point and, consequently, (16) has a unique local solution.
In order to extend a local solution to the whole interval [0, ∞) it is sufficient to show that the solution is bounded. Integrating (16) over F and using the bounds on the coefficients yields a differential inequality for ‖μ_t‖_TV which guarantees boundedness on bounded time intervals, and this completes the proof of the global existence and uniqueness.
Finally, we show that the measure μ_t is positive for t > 0; this follows by evaluating (16) on an arbitrary Borel set. A straightforward conclusion is the following statement about solutions in the L¹ space.
Corollary 1 Suppose that for each x, y ∈ F the measure K(x, y, dz) is absolutely continuous with respect to the Lebesgue measure with density k(x, y, z). Under the assumptions of Theorem 2, if μ_0 has a density u_0 ∈ L¹ with respect to the Lebesgue measure, then μ_t also has a density u(t, ·) ∈ L¹, and u(t, z) is the unique solution of the corresponding equation for densities with the initial condition u(0, ·) = u_0(·).
Proof Take a Borel set A with zero Lebesgue measure. Since the measure K(x, y, dz) is absolutely continuous with respect to the Lebesgue measure, K(x, y, A) = 0 for every x, y ∈ F, and consequently (d/dt) μ_t(A) ≤ −D̲ μ_t(A) with μ_0(A) = 0. Therefore μ_t(A) = 0 for all t > 0, and the statement follows from the Radon-Nikodym theorem.
Boundedness, extinction and persistence
From the proof of Theorem 2 it follows that the function M(t) = μ_t(F) is bounded from above. Now we analyze further properties of M(t). Let us recall that a population becomes extinct if lim_{t→∞} M(t) = 0, and is persistent provided lim inf_{t→∞} M(t) > 0.
Proposition 1 If inf z D(z) ≥ sup z p(z) in the case of random mating and inf z D(z) ≥ 1 in the case of assortative mating, then the population becomes extinct. If sup z D(z) < inf z p(z) in the case of random mating and sup z D(z) < 1 in the case of assortative mating, then the population is persistent.
Proof In the case of random mating these properties follow from elementary differential inequalities for M(t), obtained by integrating (16) over F and using the bounds on p and D. In the case of assortative mating we use similar inequalities with p̄ and p̲ replaced by 1.
Equation on a global attractor
In order to describe more precisely the asymptotic behavior of M(t), we need to assume that the functions p, D, I, U do not depend on x and are positive constants. To avoid extinction of the population, we additionally assume that D < p in the case of random mating and D < 1 =: p in the case of assortative mating (see Proposition 1). Then M(t) satisfies the logistic equation

M′(t) = (p − D) M(t) − I U M(t)²,

and M̄ = (p − D)/(I U) is a stationary solution of this equation. Using basic facts from the theory of differential equations, it is easy to see that

lim_{t→∞} M(t) = M̄    (25)

for every positive solution. The number M̄ is an analogue of the carrying capacity studied in Bolker and Pacala (1997) and Fournier and Méléard (2004). In our case M̄ is the number of individuals per unit of volume after a long time. From (25) it follows that all positive solutions converge to the set

A = { μ ∈ M(F) : μ(F) = M̄ },

which is invariant with respect to Eq. (16), i.e., if an initial condition μ_0 belongs to A, then μ_t ∈ A for t > 0. It means that A is a global attractor for Eq. (16). If μ_0 ∈ A then the function t → μ_t satisfies a reduced equation (26). Let μ_t be a positive solution of (16). If we substitute μ̃_t = (M̄/M(t)) μ_t, then the function t → μ̃_t also satisfies (26). Therefore, the long-time behaviour of solutions of (26) is completely characterized by the dynamics on the attractor A.
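A quick numerical check of the logistic equation for M(t), assuming the reconstructed form above and illustrative constant rates with D < p; the solution approaches the carrying-capacity analogue M̄ = (p − D)/(IU).

```python
import numpy as np
from scipy.integrate import solve_ivp

p, D, I, U = 1.0, 0.4, 0.1, 0.5              # illustrative rates with D < p
M_bar = (p - D) / (I * U)                    # stationary size, here 12.0

sol = solve_ivp(lambda t, M: (p - D) * M - I * U * M**2,
                (0.0, 50.0), [0.5])
print(sol.y[0, -1], M_bar)                   # the solution approaches M_bar
```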
Now we consider only solutions on the set A. If we replace μ_t(dx) by M̄ μ_t(dx) and t by pt in (26), then μ_t becomes a probability measure for all t ≥ 0. The function t → μ_t satisfies

dμ_t/dt = ∫_F ∫_F K(x, y, ·) μ_t(dx) μ_t(dy) − μ_t    (27)

in the case of random mating, and

dμ_t/dt = ∫_F ∫_F m(x, y; μ_t) K(x, y, ·) μ_t(dx) μ_t(dy) − μ_t    (28)

in the case of assortative mating, where m(x, y; μ) is the symmetric assortative mating rate introduced in Sect. 2.
General remarks
In this section we study the convergence of solutions of Eq. (27) to stationary solutions. Equation (27) can be treated as an evolution equation

dμ_t/dt = P μ_t − μ_t,    (29)

where the operator P, acting on the space of all probability Borel measures on F, is given by the formula

(P μ)(A) = ∫_F ∫_F K(x, y, A) μ(dx) μ(dy).    (30)

The solution of (29) with an initial measure μ_0 is the deterministic process μ_t given by Theorem 2. The set O(μ_0) := {μ_t : t ≥ 0} is called the orbit of μ_0. Since the problem of the asymptotic stability of the solutions of Eq. (29) in an arbitrary d-dimensional space seems to be quite difficult, we consider only the case when d = 1 and F is a closed interval with nonempty interior. Generally, Eq. (29) has a lot of different stationary measures, and it is rather difficult to predict the limit of a given solution. Assumption (9) allows us to omit this difficulty. Indeed, if a measure μ has a finite first moment q, then according to (8) and (9)

∫_F z (P μ)(dz) = ∫_F ∫_F ( ∫_F z K(x, y, dz) ) μ(dx) μ(dy) = ∫_F ∫_F (x + y)/2 μ(dx) μ(dy) = q.

Therefore, any solution μ_t of Eq. (29) has the same first moment for all t ≥ 0. It means that we can restrict our consideration only to probability Borel measures with the same first moment.
The following example shows why we consider solutions of Eq. (29) with values in the space of probability Borel measures instead of the space of probability densities. In this example all the stationary solutions are the Dirac measures, and any solution converges in the weak sense to some stationary measure.
Example 1 Let Z be a random variable with values in the interval [−1, 1] such that EZ = 0 and |Z| ≢ 1. Assume that if x and y are parental traits, then the trait of an offspring is given by

z = (x + y)/2 + Z (x − y)/2,

i.e., the trait of an offspring is distributed between the traits of the parents according to the law of Z. For a random variable X we denote by m_1(X) and m_2(X) its first and second moments and by D(X) its variance, i.e., D(X) = m_2(X) − (m_1(X))². Let μ_t, t ≥ 0, be a solution of (29) with a finite second moment, and let X_t, t ≥ 0, be random variables with distribution measures μ_t. Then x̄ := m_1(X_t) is a constant, and

(d/dt) D(X_t) = −(1/2)(1 − D(Z)) D(X_t).

Since D(Z) < 1, we have lim_{t→∞} D(X_t) = 0. Consequently, μ_t converges weakly to δ_x̄.
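Example 1 can be illustrated by a simple particle approximation of (29), assuming a uniform law for Z (so EZ = 0 and D(Z) = 1/3 < 1): each particle is replaced, at a small rate, by the offspring of two randomly chosen parents. The mean trait stays approximately constant while the variance decays, in line with the weak convergence to δ_x̄.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(pop, dt, rng):
    """One Euler step of d(mu)/dt = P(mu) - mu on a particle ensemble:
    each particle is replaced with probability dt by an offspring
    z = (x + y)/2 + Z (x - y)/2 of two randomly chosen parents."""
    n = len(pop)
    x = pop[rng.integers(0, n, n)]
    y = pop[rng.integers(0, n, n)]
    Z = rng.uniform(-1.0, 1.0, n)            # EZ = 0 and |Z| not identically 1
    child = 0.5 * (x + y) + 0.5 * Z * (x - y)
    return np.where(rng.random(n) < dt, child, pop)

pop = rng.normal(2.0, 1.0, 20000)            # initial traits with mean 2
for _ in range(400):
    pop = step(pop, 0.05, rng)
print(pop.mean(), pop.var())                 # mean stays near 2, variance near 0
```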
The Wasserstein distance
In order to investigate asymptotic properties of the solutions, we recall some basic facts concerning the Wasserstein distance between measures. For α ≥ 1 we denote by M_α the set of all probability Borel measures μ on F such that ∫_F |z|^α μ(dz) < ∞ and by M_{α,q} the subset of M_α which contains all the measures such that ∫_F z μ(dz) = q. For any two measures μ, ν ∈ M_1, we define the Wasserstein distance by the formula

d(μ, ν) = sup { ∫_F f dμ − ∫_F f dν : f ∈ Lip_1 },

where Lip_1 is the set of all continuous functions f : F → R such that |f(x) − f(y)| ≤ |x − y| for any x, y ∈ F. The following lemma is of great importance in the subsequent part of the paper.
Lemma 1
The Wasserstein distance between measures μ, ν ∈ M_1 can be computed by the formula

d(μ, ν) = ∫_R |F_μ(x) − F_ν(x)| dx,

where F_μ and F_ν denote the cumulative distribution functions of μ and ν.

Proof Let Φ = F_μ − F_ν. If F is bounded from below or from above, then Φ(x) = 0 for x ∉ F. Since f is a locally absolutely continuous function, integrating by parts leads to the formula

∫_F f dμ − ∫_F f dν = −∫_R f′(x) Φ(x) dx.

Clearly the supremum over f ∈ Lip_1 is attained when f′(z) = −sgn Φ(z).
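The CDF formula of Lemma 1 is easy to verify numerically; the sketch below compares the integral of |F_μ − F_ν| on a fine grid with SciPy's built-in one-dimensional Wasserstein distance for two illustrative empirical samples.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 5000)
b = rng.normal(0.5, 1.5, 5000)

# d(mu, nu) = integral of |F_mu - F_nu|, approximated on a grid.
grid = np.linspace(-10.0, 10.0, 20001)
F_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
F_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
d_cdf = np.sum(np.abs(F_a - F_b)[:-1] * np.diff(grid))

print(d_cdf, wasserstein_distance(a, b))     # the two values agree closely
```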
Consider probability measures μ and μ_n, n ∈ N, on the set F. We recall that the sequence (μ_n) converges weakly (or in a weak sense) to μ if, for any continuous and bounded function f : F → R, ∫_F f dμ_n → ∫_F f dμ as n → ∞. It is well known that convergence in the Wasserstein distance implies the weak convergence of measures. Moreover, the space of probability Borel measures on any complete metric space is also a complete metric space with the Wasserstein distance (see e.g. Bolley 1934; Rachev 1991). The convergence of the sequence μ_n to μ in the space M_{1,q} is equivalent to the following condition (see Villani (2008), Definition 6.7 and Theorem 6.8):

(C) μ_n → μ weakly as n → ∞, and lim_{n→∞} ∫_F |x| μ_n(dx) = ∫_F |x| μ(dx).

Suppose now that M ⊂ M_{1,q} and that there exist α > 1 and m > 0 such that ∫_F |x|^α μ(dx) ≤ m for all μ ∈ M. Then the set M is relatively compact in M_{1,q}. Indeed, by the Markov inequality, μ({x : |x| ≤ R}) ≥ 1 − m/R^α for all μ ∈ M, which means that the set M is tight, and thus M is relatively compact in the topology of weak convergence (see e.g. Billingsley (1995)). Moreover, for μ ∈ M we have ∫_{|x|>R} |x| μ(dx) ≤ m R^{1−α}, which implies the second condition in (C). Consequently, the set M is relatively compact in M_{1,q}.
Theorems on asymptotic stability
We use the script letter 𝒦 for the cumulative distribution function of the measure K, i.e.,

𝒦(x, y, z) = K(x, y, (−∞, z] ∩ F).

The main result of this section is the following.
Theorem 3 Fix q ∈ F. Suppose that (i) for all y, z ∈ F the function 𝒦(x, y, z) is absolutely continuous with respect to x and satisfies, for each a, b, y ∈ F, a non-degeneracy condition which forces the operator P to strictly decrease the Wasserstein distance; (ii) there are constants α > 1, L < 1, and C ≥ 0 such that for every μ ∈ M_{α,q} we have

∫_F |x|^α (Pμ)(dx) ≤ L ∫_F |x|^α μ(dx) + C.

Then for every initial measure μ_0 ∈ M_{1,q} there exists a unique solution μ_t, t ≥ 0, of Eq. (27) with values in M_{1,q}. Moreover, there exists a unique measure μ* ∈ M_{1,q} such that Pμ* = μ* and for every initial measure μ_0 ∈ M_{1,q} the solution μ_t, t ≥ 0, of Eq. (27) converges to μ* in the space M_{1,q}.
We split the proof of Theorem 3 into a sequence of lemmas. Denote by F q the set of all cumulative distribution functions of the signed measures of the form μ − ν, where μ, ν ∈ M 1,q .
Lemma 2 Suppose that for all y ∈ F and Φ ∈ F_q, Φ ≢ 0, the condition of Theorem 3(i) holds. Then d(Pμ, Pν) < d(μ, ν) for all μ, ν ∈ M_{1,q} with μ ≠ ν. In particular, for every initial measure μ_0 ∈ M_{1,q} there exists a unique solution μ_t, t ≥ 0, of Eq. (27) with values in M_{1,q}.
Proof Since K(x, y, ·) = K(y, x, ·), we can write the operator P in a symmetrized form. If Φ is the cumulative distribution function of μ − ν, then the signed measure Pμ − Pν has a cumulative distribution function which can be expressed as an integral of Φ against the kernel 𝒦, and the claim follows from Lemma 1.
Proof Since P(M_{1,q}) ⊂ M_{1,q}, every solution of (29) with an initial value from the set M_{1,q} remains in this set for all t ≥ 0. Any solution μ_t of (29) satisfies the integral equation

μ_t = e^{−t} μ_0 + ∫_0^t e^{s−t} P μ_s ds.    (42)

Let μ_t and ν_t be solutions of (29) with values in M_{1,q} and such that μ_T ≠ ν_T. Then μ_t ≠ ν_t for t ≤ T, and applying Lemma 2 to (42) with α(t) = e^t d(μ_t, ν_t), Gronwall's lemma gives α(t) < α(r) e^{t−r} for r < t ≤ T, which yields d(μ_t, ν_t) < d(μ_r, ν_r).
Lemma 5 Assume that condition (ii) of Theorem 3 is fulfilled. Then for every initial measure μ_0 ∈ M_{α,q} the orbit O(μ_0) is relatively compact in M_{1,q} and cl O(μ_0) ⊂ M_{α,q}.
Proof Take μ_0 ∈ M_{α,q} and let μ_t be the solution of (29) with the initial condition μ_0. Let m_0 = C/(1 − L), m_α = ∫_F |x|^α μ_0(dx), and m = max{m_0, m_α}. We check that μ_t ∈ M_{α,q} for t ≥ 0 and

∫_F |x|^α μ_t(dx) ≤ m for all t ≥ 0.    (43)

To see this, we use the integral equation (42): the operator on its right-hand side is a contraction on the space C([0, T], M_{1,q}), and the function t → μ_t, 0 ≤ t ≤ T, is its fixed point. Iterating condition (ii) along (42) shows that μ_t ∈ M_{α,q} for t ≥ 0 and that (43) holds. Thus, there exists m > 0 depending on μ_0 and α > 1 such that (43) holds. Consequently, the orbit is a relatively compact subset of M_{1,q} and cl O(μ_0) ⊂ M_{α,q}.
We can strengthen the thesis of Theorem 3, if we additionally assume that for all x, y ∈ F the measure K (x, y, dz) has a density k (x, y, z) and k is a bounded and continuous function.
Theorem 4 Assume that k is a bounded and continuous function and that the assumptions of Theorem 3 are satisfied. Then the stationary measure μ* is absolutely continuous with respect to the Lebesgue measure and has a continuous and bounded density u*(x). Moreover, for every μ_0 ∈ M_{1,q} the solution μ_t of Eq. (27) can be written in the form μ_t = e^{−t} μ_0 + ν_t, where the ν_t are absolutely continuous measures with continuous and bounded densities v_t(x), which converge uniformly to u*(x).
Proof Since k is a continuous and bounded function and μ* is a probability measure, the function

u*(x) = ∫_F ∫_F k(y, z, x) μ*(dy) μ*(dz)

is continuous and bounded, and u* is a density of μ* because μ* is a fixed point of the operator P. For any initial measure μ_0 ∈ M_{1,q} the solution μ_t of (27) satisfies the equation

μ_t = e^{−t} μ_0 + ∫_0^t e^{s−t} P μ_s ds.

For each s ≥ 0 the measure Pμ_s has a continuous and bounded density ū_s(x). Since the function φ : [0, ∞) → M_{1,q} given by φ(s) = μ_s is continuous and lim_{s→∞} μ_s = μ*, the function ψ : [0, ∞) → C_b(F) given by ψ(s) = ū_s is continuous and lim_{s→∞} ū_s = u*. Thus the measures ν_t = ∫_0^t e^{s−t} P μ_s ds have continuous and bounded densities v_t(x), and v_t converges uniformly to u* as t → ∞.
Examples
Now we study two biologically reasonable forms of K which satisfy conditions (i) and (ii) of Theorem 3.
Example 2 We suppose that if x and y are parental traits, then the trait of the offspring is of the form

z = (x + y)/2 + Z,

where Z is a 0-mean random variable with EZ² < ∞ and a positive density h. Then

𝒦(x, y, z) = H(z − (x + y)/2) and k(x, y, z) = h(z − (x + y)/2),

where H is the cumulative distribution function of Z. Condition (i) reduces to an inequality involving the shifted densities h(· − α) and h(· − β) for all α, β ∈ R, which is a simple consequence of the assumption that h is a positive density. Now we check that condition (ii) holds with α = 2. We have

∫_F z² K(x, y, dz) = ((x + y)/2)² + EZ² ≤ (x² + y²)/2 + EZ²,

so condition (ii) holds with L = 1/2 and C = q²/2 + EZ². If we additionally assume that the density h is a continuous function, then according to Theorem 4 the limit measure μ* has a continuous and bounded density u*, μ_t = e^{−t} μ_0 + v_t(x) dx, and v_t converges uniformly to u*. Now we determine the limiting distribution μ*. The densities of the measures μ* and μ_0 have the same first moment q, and u* satisfies the stationarity equation (45). Observe that if a probability density f satisfies (45) and ∫_R x f(x) dx = 0, then f(x − q) also satisfies (45) and has the first moment q. Since u* is the unique solution of (45) with the first moment q, we have u*(x) = f(x − q), where f is the density of a random variable Y with EY = 0 such that Y has the same distribution as (Y_1 + Y_2)/2 + Z, where Y_1 and Y_2 are two independent copies of Y and Z is a random variable with density h independent of Y_1 and Y_2.

Example 3 Now suppose that the trait of the offspring is of the form

z = Z (x + y),

where Z is a random variable with values in [0, 1] such that EZ = 1/2 and Z has a density h which is positive on an interval (0, ε). Condition (i) is then satisfied for all α, β > 0; this inequality is a simple consequence of the positivity of h on the interval (0, ε). Now we check that condition (ii) holds with α = 2. We have

∫_F z² K(x, y, dz) = EZ² (x + y)² ≤ 2 EZ² (x² + y²),

so condition (ii) holds with L = 2EZ². Since 0 ≤ Z ≤ 1, we have L = 2EZ² < 2EZ = 1.
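A particle sketch of Example 2, assuming a Gaussian density h for the noise Z: iterating the operator P on an empirical sample preserves the first moment q, while the variance approaches the fixed point of v ↦ v/2 + Var(Z), i.e. 2 Var(Z), the variance of the stationary law of q + Y.

```python
import numpy as np

rng = np.random.default_rng(2)

def apply_P(pop, rng):
    """One application of P for the additive model: each new trait is
    (x + y)/2 + Z with parents x, y drawn independently from pop."""
    n = len(pop)
    x = pop[rng.integers(0, n, n)]
    y = pop[rng.integers(0, n, n)]
    Z = rng.normal(0.0, 0.3, n)              # 0-mean noise with positive density h
    return 0.5 * (x + y) + Z

pop = rng.uniform(0.0, 4.0, 50000)           # first moment q = 2 is preserved
for _ in range(60):
    pop = apply_P(pop, rng)
print(pop.mean(), pop.var())                 # mean ~ 2, variance ~ 2 * 0.3**2
```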
Remark 1
The kernel k in Example 3 is not a continuous function even if the density h is continuous, so we cannot apply Theorem 4 directly in this case. But it is not difficult to check that if q > 0 then μ*({0}) = 0, and to prove that the invariant measure μ* has a density u* which is a continuous function on the interval (0, ∞). Moreover, repeating the proof of Theorem 4, one can check that μ_t = e^{−t} μ_0 + v_t(x) dx, and v_t converges uniformly to u* on the sets [ε, ∞), ε > 0. In particular, if we consider Eq. (29) on the space of probability densities, then every solution converges to u* in L¹[0, ∞).
Conclusion
In this paper we presented phenotype-structured population models with sexual reproduction. We considered both random and assortative mating. Our starting point is the individual-based model, which clearly explains all interactions between individuals. The limit passage with the number of individuals to infinity leads to the macroscopic model, which is a nonlinear evolution equation. We give conditions which guarantee the global existence of solutions, persistence of the population, and convergence of its size to a stable level. Next, we consider only a population with random mating, and under suitable assumptions we prove that the phenotypic profile of the population converges to a stationary profile. It would be interesting to study analytically the long-time behavior of the phenotypic profile of a population with assortative mating. Some numerical results presented in the paper Doebeli et al. (2007) suggest that also in this case one can expect convergence of the phenotypic profile to multimodal limit distributions. This would suggest that assortative mating can lead to a polymorphic population and adaptive speciation. We hope that the methods developed here for the asymptotic analysis of populations with random mating will also be useful in the case of assortative mating. In order to do so, we probably need to modify the model of assortative mating (7) presented in Sect. 2, because it has the disadvantage that the mating rate does not satisfy the condition Σ_{j=1}^{n} m(x_i, x_j) = 1 for all i. We can construct a new model which corresponds to the same preference function a(x, y) with a symmetric mating rate m which has the above property. In order to do this we look for constants c_1, ..., c_n, depending on the state of the population, such that

m(x_i, x_j; x) = (c_i + c_j) a(x_i, x_j)    (50)

and Σ_{j=1}^{n} m(x_i, x_j; x) = 1 for all i. In this way we obtain a system of linear equations for c_1, ..., c_n:

Σ_{j=1}^{n} b_{ij} c_j = 1, for i = 1, ..., n,    (51)

where b_{ij} = a(x_i, x_j) for i ≠ j and b_{ii} = a(x_i, x_i) + Σ_{l=1}^{n} a(x_i, x_l). Since the matrix [b_{ij}] has positive entries and a dominant main diagonal, system (51) has a unique and positive solution. The passage with the number of individuals to infinity leads to the following mating rate

m(x, y; μ) = (c(x; μ) + c(y; μ)) a(x, y),

where the function c(x; μ) depends on the phenotypic distribution μ and satisfies the following Fredholm equation of the second kind:

c(x; μ) ∫_F a(x, y) μ(dy) + ∫_F c(y; μ) a(x, y) μ(dy) = 1.
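The linear system (51) is straightforward to solve numerically. The sketch below, with an illustrative Gaussian preference function, builds the matrix [b_ij], solves for the constants c_i, and checks that the resulting symmetric rates (50) have unit row sums and that the solution is positive, as claimed above.

```python
import numpy as np

def mating_constants(traits, a):
    """Solve sum_j b_ij c_j = 1 with b_ij = a(x_i, x_j) for i != j and
    b_ii = a(x_i, x_i) + sum_l a(x_i, x_l), as in system (51)."""
    A = a(traits[:, None], traits[None, :])
    B = A.copy()
    B[np.diag_indices_from(B)] += A.sum(axis=1)
    c = np.linalg.solve(B, np.ones(len(traits)))
    m = (c[:, None] + c[None, :]) * A         # rates m(x_i, x_j; x) = (c_i + c_j) a_ij
    return c, m

traits = np.linspace(0.0, 1.0, 6)
a = lambda x, y: np.exp(-(x - y) ** 2 / 0.2)  # symmetric preference function
c, m = mating_constants(traits, a)
assert np.all(c > 0)                          # unique positive solution
assert np.allclose(m.sum(axis=1), 1.0)        # each individual mates with rate 1
```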
One can introduce a general model which covers both semi-random and assortative mating. Let p(x) be the initial capability of mating of an individual with trait x and let a(x, y) be a symmetric nonnegative preference function. We define a cumulative preference function by ā(x, y) = a(x, y) p(x) p(y). The mating rate m is a symmetric function given by (50) with a replaced by ā, and we assume that Σ_{j=1}^{n} m(x_i, x_j; x) = p(x_i) for each i. The mating rate in the infinitesimal model is of the form

m(x, y; μ) = (c(x; μ) + c(y; μ)) a(x, y) p(x) p(y),

where the function c(x; μ) satisfies the following equation:

c(x; μ) ∫_F a(x, y) p(y) μ(dy) + ∫_F c(y; μ) a(x, y) p(y) μ(dy) = 1.
In particular, in the semi-random case we have a ≡ 1 and c ≡ 1/(2 ∫_F p(y) μ(dy)), and the mating rate is given by (3). Let us recall that in the general case c is not only a function of x but also depends on μ; therefore, the proofs of the results from Sects. 3 and 4 cannot be automatically adapted to these models.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
"Mathematics"
] |
Natural Convection Fluid Flow and Heat Transfer in a Valley-Shaped Cavity
The phenomenon of natural convection is the subject of significant research interest due to its widespread occurrence in both natural and industrial contexts. This study investigates natural convection within triangular enclosures, specifically emphasizing a valley-shaped configuration. We comprehensively analyse the unsteady, non-dimensional time-varying convection resulting from natural fluid flow within a valley-shaped cavity, where the inclined walls serve as hot surfaces and the top wall functions as a cold surface. We explore unsteady natural convection flows in this cavity, using air as the operating fluid, over a range of Rayleigh numbers from Ra = 10⁰ to 10⁸. Additionally, various non-dimensional times τ, spanning from 0 to 5000, are examined, with a fixed Prandtl number (Pr = 0.71) and aspect ratio (A = 0.5). Employing a two-dimensional framework for numerical analysis, we focus on identifying unsteady flow mechanisms characterized by different non-dimensional times, including symmetric, asymmetric, and unsteady flow patterns. The numerical results reveal that natural convection flows remain steady and symmetric for Rayleigh numbers ranging from 10⁰ to 7 × 10³. Asymmetric flow occurs when Ra surpasses 7 × 10³. Under the asymmetric condition, the flow passes through an unsteady stage before stabilizing in the fully developed stage for 7 × 10³ < Ra < 10⁷. This study demonstrates that periodic unsteady flows shift into chaotic states during the transitional stage before returning to periodic behavior in the developed stage, whereas chaotic flow remains predominant in the unsteady regime at larger Rayleigh numbers. Furthermore, we present an analysis of heat transfer within the cavity, discussing and quantifying its dependence on the Rayleigh number.
Introduction
The study of natural convection within confined spaces has attracted considerable attention from scientists due to its widespread applicability across various disciplines [1,2]. Researchers have utilized different types of enclosures with diverse boundary conditions as experimental setups to investigate natural convection, aiming to deepen our understanding of heat transfer mechanisms and fluid dynamics. This interest stems from the multitude of applications where natural convection phenomena are integral, spanning fields such as geophysics, building insulation, geothermal reservoir management, and industrial separation processes. However, the complexity of the Earth's surface, characterized by irregular geometries highly influenced by slanted terrains, poses challenges for traditional geometric configurations. Despite the prevalence of irregular shapes in both natural and industrial settings, natural convection remains a subject of ongoing study owing to its significance in understanding fluid flow behavior. While much research has focused on natural convection within conventional square or rectangular enclosures due to their simplicity [3,4], the investigation of natural convection within triangular-shaped cavities holds particular importance, as it contributes significantly to our understanding of phenomena such as Rayleigh-Bénard convection [5,6]. Recently, Cui et al. [7] studied mixed convection and heat transfer in an arc-shaped cavity with inner heat sources under conditions of bottom heating and top-wall cooling. Chuhan et al. [8] examined the thermal behavior of a power-law fluid in a plus-shaped cavity as a result of natural convection, taking into account the Darcy number and magnetohydrodynamics. Sarangi et al. [9] explored the heat loss induced by radiation and persistent laminar natural convection in a solar cooker with a rectangular or trapezoidal cavity.
Enhancing our understanding of natural convection flows holds paramount importance in improving predictions of heat transfer and facilitating flow control mechanisms. Researchers intrigued by this phenomenon can consult detailed explanations of buoyancy-induced natural convection flows [10]. Natural convection near a thermal boundary is often idealized as flow adjacent to a thermally well-conducting infinite or semi-infinite flat plate, leading to the formation of Rayleigh-Bénard convection. Studies by Manneville and Bodenschatz et al. [11,12] provide comprehensive insights into Rayleigh-Bénard instability, whereas Sparrow and Husar [13] focus on Rayleigh-Bénard convection on inclined flat plates. Investigating cavities with vertical and horizontal temperature gradients reveals two distinct natural or free convection flow scenarios. While natural convection is predominantly studied in rectangular or square cavities due to their simplicity, Batchelor's [14] research offers a significant understanding of natural convection flows within differentially heated cavities, particularly emphasizing the influence of conduction on heat transfer, especially at low Rayleigh numbers. As the Rayleigh number exceeds a threshold, convection controls the flow dynamics. Early studies primarily examined stable natural convection flows [15-17]. The intriguing phenomenon of baroclinity, engendered by the thermal vertical wall, instigates spontaneous convection flows within the interior cavity through viscous shear under symmetric conditions [18]. At lower Rayleigh numbers, the thermal boundary layer proximal to the wall remains steady, while convective instability induces discrete traveling waves as the Rayleigh number increases [18]. Recent studies have delved into the dynamics and transient nature of natural convection flows, building upon earlier investigations [19,20]. These findings contribute to our understanding of the complex interplay between thermal gradients and fluid dynamics within confined spaces, shedding light on the transient behaviors inherent to natural convection phenomena. Recent research also indicates the occurrence of baroclinity-induced spontaneous convection flows and turbulent Rayleigh-Bénard convection [21]. Rahaman et al. [22,23] explore transitional natural convection flows in a trapezoidal enclosure heated from below, supported by numerical investigations, further enriching our comprehension of this complex phenomenon.
Natural convection flows within cavities featuring inclined walls have attracted considerable attention due to their prevalence and ease of observation [24]. Specifically, investigations into natural convection flows within triangular cavities have revealed their significant enhancement of Rayleigh-Bénard convection, as documented in prior studies [5,6]. Extensive research explores the characteristics and heat transfer mechanisms of natural convection flows within attic-shaped cavities characterized by temperature gradients between the bottom surface and the inclined borders [25-31]. Notably, studies by Asan and Namli [25,26] and Salman [27] have examined the instability and bifurcations of natural convection flow solutions in triangular cavities, presenting empirical evidence of the profound influence of the aspect ratio and Rayleigh number on both the temperature and flow fields. Analytical findings indicate that as the Rayleigh numbers increase and the aspect ratios decrease, multiple vortex flow patterns occur. Poulikakos and Bejan [29] investigated heat transmission and natural convection flows within attic spaces during night or winter conditions, establishing scaling relationships and discussing natural convection flow dynamics [30]. Flack [31] elucidated heat transport dynamics using the Nusselt number-Rayleigh number relationship, while Holtzman et al. [32] observed the transition from symmetric to asymmetric flow patterns with increasing Grashof numbers, identifying a Pitchfork bifurcation based on experimental data. Furthermore, Lei et al. [33] provided evidence of the existence of transient natural convection flows within attic cavities, further enriching our understanding of this complex phenomenon.
The evolution of flow under sudden heating and cooling undergoes three distinct stages: early, transitional, and steady or quasi-steady [33]. It has been observed that lower Rayleigh numbers result in heating across the entire flow zone rather than the splitting of the thermal boundary layer, with layer separation occurring as the Rayleigh numbers increase. Particularly in scenarios of rapid cooling, the thickness of the thermal boundary layer exceeds the vertical distance from the center of the inclined surface to the horizontal bottom, especially at lower Rayleigh numbers. The thermal boundary layer stabilizes prior to cavity cooling as the Rayleigh numbers increase, with fluid passing the sloping surface and contacting the bottom tip before entering the interior. Attic heat transmission has been studied under thermal stimuli by Saha et al. [34-38]. Additionally, researchers have investigated the behavior and heat transfer dynamics of natural convection flows in wedge-shaped cavities, which serve as models for various shallow-water bodies with inclined bottom surfaces, such as reservoir sidearms and seashores. As described in [39-41], convective flow within wedges undergoes three stages starting from isothermal and stationary states. Mao et al. [42-44] argued that the horizontal location of the wedge determines the primary heat transfer mode and flow status for varying Rayleigh numbers, with different flow regimes observed in the shallow littoral zone. Bednarz et al. [45] explored natural convection flows within reservoir-shaped cavities, observing the formation of two heating boundary layers: inflow at the bottom and unstable reflux below the water surface for different Grashof values. Understanding natural convection flows in V-shaped cavities is crucial, given their prevalence in both natural and industrial systems [46-57]. Kenjeres [58] quantified the heat transfer and turbulent natural convection fluxes in V-shaped cavities, noting that upslope flow increases with rising Rayleigh numbers. Kimura et al. [59] utilized angled V-shaped channel open chambers to calculate heat transfer dynamics.
The flow pattern and heat transfer quantities have been meticulously measured in previous studies [30,31]. Despite the detailed examinations conducted, natural convection flows within an attic or a wedge-shaped triangular cavity may not accurately depict the flow dynamics in a V-shaped cavity over time. Given their natural occurrence, natural convection flows within V-shaped triangular enclosures with opposing boundary conditions have been deemed worthy of thorough investigation. Bhowmick et al. [60-62] investigated the transition from symmetric steady flow to asymmetric unsteady flow in a V-shaped triangular enclosure, where heating occurred from below and cooling from above. It was observed that different fluids exhibit distinct movement behaviors. Despite the transition from regular to chaotic flow having been witnessed in the valley-shaped cavity heated from below, the mechanisms driving unsteady flow in such cavities remain unclear. To the best of the authors' knowledge, no study has examined the unsteady flow structures present at various times in the valley-shaped cavity. Moreover, there is a need for further quantitative analysis of the dynamics of heat transfer in such configurations.
This literature review highlights the significance of investigating unsteady natural convection in V-shaped cavities for air, providing valuable insights into the flow patterns, transition processes, and heat transfer phenomena in such systems. However, a comprehensive understanding of the intricate physics governing unsteady flow mechanisms within V-shaped cavities remains imperative. Analysis of the existing literature reveals a dearth of studies characterizing unsteady flow structures at different non-dimensional times within V-shaped cavities. To address this gap, the present study examines unsteady flow in a valley-shaped cavity through two-dimensional numerical simulations, considering air with Rayleigh numbers ranging from 10^0 to 10^8, a Prandtl number of 0.71, and an aspect ratio of 0.5. The impact of various Ra and τ values on the flow structure, heat transfer, and transient flow characteristics in the fully developed stage of the valley-shaped cavity is covered in this study. This research contributes to existing knowledge by enhancing our understanding of how fluid characteristics affect flow behavior and transition processes. The evolution of flow mechanisms and changes within the cavity are elucidated using the temperature and velocity differences over time. Such insights are crucial for enhancing the energy efficiency of engineering applications and optimizing the design of thermal systems like heat exchangers and cooling systems. Consequently, this study is poised to make a novel contribution to the field, offering distinct and valuable conclusions that will benefit specialists engaged in modeling and experimentation on flow over complex geometries.
Numerical Model Formulations
The primary objective of this study is to analyze the characteristics of natural convection flows in a triangular cavity with a valley-shaped configuration. To achieve this, a two-dimensional numerical simulation approach is employed. The physical model and its corresponding boundary conditions are visually represented in Figure 1. In order to rectify the singularity at the junction between the top and inclined walls, a small amount of material is removed from both top corners; specifically, 4% of the length is cut away in the form of minuscule tips. This minor adjustment has no discernible impact on the fluid flow and heat transfer mechanics, as indicated by previous studies [18,20-24,29-31]. The dimensions of the cavity are defined as follows: the horizontal length is 2L and the height is H, where L = 2H and the aspect ratio A = H/L = 0.5. Initially, the fluid inside the cavity is at the uniform temperature T0 and at rest. The top and inclined walls then undergo instantaneous cooling and heating to the temperatures Tc = T0 − ΔT/2 and Th = T0 + ΔT/2, respectively. In every case, all the boundaries are motionless.
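To make the setup above concrete, the following minimal Python sketch writes out the geometry and the thermal boundary conditions. All numerical values (reference temperature, applied temperature difference, physical dimensions) are illustrative placeholders, not values taken from the paper, and how the 4% corner cut is measured is an assumption.

```python
import numpy as np

# Illustrative values only; the study itself works in non-dimensional terms.
T0 = 300.0            # reference temperature (hypothetical)
dT = 1.0              # applied temperature difference (hypothetical)
T_c = T0 - dT / 2.0   # cold top wall
T_h = T0 + dT / 2.0   # hot inclined walls

H, L = 0.5, 1.0       # height and half-length, so A = H / L = 0.5
bottom_tip = np.array([0.0, 0.0])   # valley bottom
top_left = np.array([-L, H])
top_right = np.array([L, H])

def inside_cavity(x, y):
    """True if (x, y) lies inside the V-shaped cavity: above the inclined
    walls y = (H / L) * |x| and below the top wall y = H."""
    return (H / L) * abs(x) <= y <= H

# Initial condition: the fluid is isothermal at T0 and at rest (U = V = 0).
# For t > 0 the top wall is held at T_c, the inclined walls at T_h, and all
# boundaries are motionless (no slip).  A small cut (4% here, an assumed
# interpretation) is applied at each top corner to remove the singular
# wall junction, as described above.
corner_cut_fraction = 0.04
print(inside_cavity(0.3, 0.4), T_c, T_h, corner_cut_fraction)
```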
This study analyses two-dimensional natural convection flows within a valley-shaped enclosure. The governing equations employed in this investigation, presented below, use the Boussinesq approximation as a simplifying assumption [62]:

∂U/∂X + ∂V/∂Y = 0, (1)

∂U/∂t + U ∂U/∂X + V ∂U/∂Y = −(1/ρ) ∂P/∂X + ν (∂²U/∂X² + ∂²U/∂Y²), (2)

∂V/∂t + U ∂V/∂X + V ∂V/∂Y = −(1/ρ) ∂P/∂Y + ν (∂²V/∂X² + ∂²V/∂Y²) + gβ(T − T0), (3)

∂T/∂t + U ∂T/∂X + V ∂T/∂Y = κ (∂²T/∂X² + ∂²T/∂Y²). (4)

Here U and V represent the horizontal and vertical velocity components in the two-dimensional (X, Y) coordinate system, P is the pressure, ρ is the density, t is the time, and T is the temperature; ν, κ, β, and g denote the kinematic viscosity, thermal diffusivity, thermal expansion coefficient, and gravitational acceleration. At the temperature T0, where T0 = (Tc + Th)/2, the fluid medium in the triangular cavity is isothermally stationary.

The dimensionless variables u, v, x, y, p, τ, and θ are the normalized counterparts of U, V, X, Y, P, t, and T, respectively. Three governing parameters, the aspect ratio (A), the Prandtl number (Pr), and the Rayleigh number (Ra), control the natural convection flows in the enclosure (see [6] for further details):

A = H/L, Pr = ν/κ, Ra = gβΔTH³/(νκ).

Substituting the dimensionless variables, Equations (1)-(4) become (for more information, see [61]):

∂u/∂x + ∂v/∂y = 0, (7)

∂u/∂τ + u ∂u/∂x + v ∂u/∂y = −∂p/∂x + Pr (∂²u/∂x² + ∂²u/∂y²), (8)

∂v/∂τ + u ∂v/∂x + v ∂v/∂y = −∂p/∂y + Pr (∂²v/∂x² + ∂²v/∂y²) + Ra Pr θ, (9)

∂θ/∂τ + u ∂θ/∂x + v ∂θ/∂y = ∂²θ/∂x² + ∂²θ/∂y². (10)

The present study used a finite-volume Navier-Stokes solver [62], Fluent 15.0 in ANSYS, to model natural convection within the valley-shaped cavity. The principles of mass, momentum, and energy conservation form the foundation of the computational model used to simulate unsteady natural convection flow of air in the cavity. The continuity, Navier-Stokes, and energy equations are solved with finite volume methods after the appropriate numerical techniques have been applied. The non-uniform 2D mesh is created with the commercial software ICEM 15.0, and the numerical results are displayed graphically with the post-processing program TECPLOT 360.
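As a quick sanity check on the governing parameters, the snippet below evaluates A, Pr, and Ra from representative air properties near room temperature; the property values and dimensions are illustrative assumptions, not inputs reported in the paper.

```python
g = 9.81          # gravitational acceleration [m/s^2]
beta = 3.4e-3     # thermal expansion coefficient of air [1/K] (illustrative)
nu = 1.6e-5       # kinematic viscosity of air [m^2/s] (illustrative)
kappa = 2.25e-5   # thermal diffusivity of air [m^2/s] (illustrative)
H, L = 0.5, 1.0   # cavity height and half-length [m] (hypothetical)
dT = 1.0          # applied temperature difference [K] (hypothetical)

A = H / L                                  # aspect ratio, 0.5 here
Pr = nu / kappa                            # Prandtl number, ~0.71 for air
Ra = g * beta * dT * H**3 / (nu * kappa)   # Rayleigh number

print(f"A = {A:.2f}, Pr = {Pr:.2f}, Ra = {Ra:.2e}")
```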
The governing Equations (7)-(10) are solved by means of a finite volume technique using the SIMPLE method, which is thoroughly explained in reference [63] and by Saha [64]. The numerical procedure for the SIMPLE method is shown in Figure 2.
Grid Dependency Test
The mesh and time step dependence for the largest Rayleigh number (Ra = 10^8), with Prandtl number Pr = 0.71 and aspect ratio A = 0.5, was investigated. Three symmetric, non-uniform meshes were employed in the test, with dimensions of 600 × 100, 800 × 150, and 1200 × 200. These meshes were designed with finer grids near the boundaries and coarser grids in the inner zone. The 800 × 150 mesh used a 3% expansion ratio, ranging from a minimum cell width of 0.00025 near the wall to a maximum width of 0.02 in the interior. Figure 3 illustrates the time series of the Nusselt number at the right wall of the cavity, obtained using the different meshes and time steps for a Rayleigh number of 10^8. The Nusselt numbers obtained from the different meshes and time steps agree closely during the initial stage, with some divergence during the mature stage. In addition, Table 1 lists the average Nusselt numbers calculated over the fully developed stage. The analysis shows that the variations across the different meshes and time steps remain within the acceptable threshold of 2%. Taking this into consideration, the numerical simulations in this study used a mesh size of 800 × 150 and a time step of 0.0025.
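The acceptance test described here is straightforward to reproduce once the average Nusselt numbers are available; the sketch below also shows how a 3% geometric expansion grades cell widths from 0.00025 at the wall to 0.02 in the interior. The Nu values are invented placeholders standing in for Table 1.

```python
# Wall-normal cell widths under a 3% geometric expansion, as described above.
w, widths = 0.00025, []
while w < 0.02:
    widths.append(w)
    w *= 1.03
print(f"{len(widths)} graded cells span widths {widths[0]:.5f} .. {widths[-1]:.5f}")

# Mesh/time-step acceptance: deviations of the fully developed average Nusselt
# number must stay within 2% of the finest-mesh result.  Values below are
# hypothetical placeholders (Table 1 holds the actual numbers).
avg_nu = {"600x100": 4.95, "800x150": 5.00, "1200x200": 5.02}
ref = avg_nu["1200x200"]
for mesh, nu in avg_nu.items():
    print(f"{mesh}: Nu = {nu:.2f}, deviation = {abs(nu - ref) / ref * 100:.2f}%")
```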
Validation
Figure 4 compares the laboratory experiments conducted by Holtzman et al. [32] with the current numerical results for additional confirmation. In contrast to the triangular cavity in this study, the experimental cavity is vertically inverted, and Rayleigh numbers are employed here in place of the corresponding Grashof numbers. The numerical results depicting the symmetric flow for Ra = 7 × 10^3 in Figure 4a and the asymmetric flows for Ra = 1.2 × 10^4 and Ra = 10^5 in Figure 4c,e agree well with the experimental results in Figure 4b for Ra = 3.5 × 10^3, Figure 4d for Ra = 7 × 10^3, and Figure 4f for Ra = 7 × 10^4. As indicated by Xu et al. [18] and Patterson and Armfield [20], the inconsistency between the numerical and experimental Ra values gives rise to a notable observation: if both the numerical and experimental results are accurate, the experimental Rayleigh number is approximately 1.5 to 2.5 times the numerical Rayleigh number. Since the experimental and computational flow patterns agree closely, the numerical methods utilized in this research can be used to depict transitional flow in a triangular cavity.
Numerical Results and Discussion
The subsequent section explains the primary characteristics of the fluid flows within a V-shaped cavity, wherein the flow is driven by heating from the inclined walls and cooling from the top wall. The analysis covers a range of Rayleigh numbers from Ra = 10^0 to 10^8, a Prandtl number Pr = 0.71, and an aspect ratio A = 0.5. Various non-dimensional times are considered to comprehensively examine the flow development. In this work, a 2D numerical simulation has been conducted. According to the numerical simulations, the evolution of the flow for these Rayleigh numbers over times from τ = 0 to 5000 may be split into symmetric flow, asymmetric flow, and unsteady flow.
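The regime map that emerges from these simulations can be summarized in a small helper function; the thresholds below follow the results reported in this section for Pr = 0.71 and A = 0.5 (the exact bifurcation points are only bracketed by the simulations, so these boundaries are indicative, not precise).

```python
def classify_regime(Ra: float) -> str:
    """Fully developed flow state for Pr = 0.71, A = 0.5 (indicative bounds)."""
    if Ra <= 7e3:
        return "steady symmetric"
    if Ra <= 1e7:
        return "steady asymmetric (after a Pitchfork bifurcation)"
    if Ra < 1e8:
        return "periodic unsteady (beyond a Hopf bifurcation)"
    return "chaotic unsteady"

for Ra in [1e2, 1e4, 1e6, 5e7, 1e8]:
    print(f"Ra = {Ra:.0e}: {classify_regime(Ra)}")
```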
To produce the simulation results, we used high-performance computing facilities with 64 processors; reaching the non-dimensional time τ = 5000 with a time step size of Δτ = 0.0025 took 15 days for a small Ra = 10^5 and 20 days for a large Ra = 10^8.
Symmetric Flow
It was found that for Ra = 10^0, 10^1, and 10^2, the flow development with non-dimensional time did not exhibit a rising or falling plume, which indicates that for those Rayleigh numbers the flow remained stable owing to the prevailing conduction dominance. The results for Ra = 10^0, 10^1, and 10^2 at τ = 0, 0.1, 0.5, 1, and 2000 are shown in Figure 5. Initially, at τ = 0, the temperature is uniform and there is no fluid flow in the cavity. Fluid flows develop when the inclined walls are suddenly heated and the top wall is suddenly cooled. In Figure 5, the temperature changes rapidly for the small Ra = 10^0 and 10^1 at τ = 0.1 and becomes symmetric, whereas for Ra = 10^2 it becomes symmetric at τ = 0.5, and this symmetric tendency is observed from τ = 0.5 to 2000. For all these Ra and τ values, the cavity contains a pair of symmetric cells. The convective flows are of relatively low magnitude, and the prevailing conduction dominance keeps the cells stable.
Asymmetric Flow
The flow for the Rayleigh numbers Ra = 10^3, 7 × 10^3, and 2 × 10^4 is shown in Figure 6 as a function of time through the displayed streamlines and isotherms. Initially, at τ = 0, the temperature is uniform and no fluid moves in the cavity. Fluid flows start when the inclined walls are suddenly heated and the top wall is suddenly cooled. In Figure 6, the temperature changes gradually for Ra = 10^3 and 7 × 10^3 at τ = 0.1 and becomes symmetric, whereas for Ra = 2 × 10^4 it becomes symmetric at τ = 1. In the initial stage, at τ = 1, the flows for all these Rayleigh numbers are symmetric and steady, and the cavity holds two symmetric cells. Because conduction dominance keeps the cells stable, the convective flows are very weak. The baroclinity near the inclined walls produces viscous shear, which induces natural convection flows in the fundamental symmetric state, but as time passes, the flow for Ra = 7 × 10^3 and 2 × 10^4 gradually becomes asymmetric. As the figure shows, in the initial transitional stage at τ = 1, the convection for Ra = 7 × 10^3 and 2 × 10^4 has strengthened, although it is not yet strong enough to disrupt the symmetric flow structure (see Figure 7), while the convection for Ra = 10^3 remains static. Though there are only two cells for Ra = 10^3 and Ra = 7 × 10^3, four cells occur in the cavity for Ra = 2 × 10^4 at τ = 10. That is, besides breaking the symmetry, the Rayleigh number causes an increase in the number of cells in the cavity. It should be noted that, depending on the initial perturbations, either of the two cells may grow bigger and advance into the cavity. The convection for Ra = 10^3 remains constant and weak at all subsequent times, such as τ = 100, 1000, 2000, and 4000, while the convection for Ra = 7 × 10^3 slowly strengthens over τ = 100, 1000, 2000, and 4000, though not enough to break the symmetric state. On the other hand, for Ra = 2 × 10^4 the convection grows so strong that it breaks the symmetric structure and creates more than two new cells, which is evident at τ = 100, and the flow becomes asymmetric (Figure 7). Six cells are present for Ra = 2 × 10^4 at τ = 100 and 1000. Depending on the initial perturbations, one of the two cells at τ = 1000 becomes larger, dominates the other cell in the cavity, and moves toward the cavity's center. At τ = 2000, the number of cells decreases to five, and this trend persists until τ = 4000. Upon careful examination, it is evident that during the further development stage the asymmetric flow attains a state of equilibrium, commonly referred to as a steady-state situation. The transition from a symmetric to an asymmetric state, occurring within the range Ra = 7 × 10^3 to Ra = 2 × 10^4, can be understood as a supercritical Pitchfork bifurcation driven by the onset of Rayleigh-Bénard instability. More cell circulation is seen, and the flow pattern becomes increasingly asymmetric.
Figure 7 depicts the time series of the temperature and velocity for Ra = 10^3, 7 × 10^3, and 2 × 10^4 at the points P1, P2, and P5. The fluid within the valley-shaped cavity is initially isothermal and stationary. The cooling of the top wall and the heating of the inclined walls occur simultaneously, with non-dimensional temperatures denoted as θc for the top wall and θh for the inclined walls. Figure 7a demonstrates that in the early stage for the small Ra = 10^3 there is no temperature difference between the points P1 and P2, while the temperature at point P5 differs. As time passes, the temperature rises and falls at the three points during the transitional period. Finally, there is no temperature fluctuation in the fully developed stage, and the flow becomes steady. To verify the steadiness of the flow, non-dimensional times from 0 to 5000 were used here. In Figure 7d, the velocity is initially zero; as time increases, the velocity remains unchanged at points P1 and P2, while at point P5 it increases and finally becomes constant. Similar behavior is seen in Figure 7b,e. The figures clearly indicate that the flow within the cavity is in a state of steady symmetry; it appears that the flow is always steady in symmetric states. In Figure 7c, the temperature drops suddenly and then rises at points P1 and P2 during the transitional period, finally stabilizing in the fully developed stage. At point P5, there are no changes because of its position. The velocity (Figure 7f) likewise fluctuates during the transitional period before settling in the fully developed stage.
Pitchfork Bifurcation
Figure 8 illustrates the isotherms and streamlines around the Rayleigh number Ra = 10^4. Figure 8a demonstrates the continued presence of clear symmetry in the flow at Ra = 7 × 10^3. At Ra = 10^4 and Ra = 1.3 × 10^4, the flow shows asymmetry, as depicted in Figure 8b,c. It is to be noted that, depending on the initial perturbations, either of the two cells might grow bigger and move toward the cavity's center. When the Rayleigh number reaches Ra = 1.3 × 10^4, as depicted in Figure 8c, an additional cell emerges in the top-right region of the cavity (depending on the initial perturbations), and such an asymmetric flow configuration becomes more clearly evident as the Rayleigh number increases. That is, besides breaking the symmetry, the Rayleigh number is responsible for the growing number of cells within the cavity. For instance, Figure 8b has four cells for Ra = 10^4, whereas Figure 8c has five cells for Ra = 1.3 × 10^4. The transition observed around Ra = 10^4, where a symmetric state transforms into an asymmetric state, can be characterized as a supercritical Pitchfork bifurcation, which occurs when Rayleigh-Bénard instability sets in.

Table 2 presents the x-velocity at point P2 (0, 0.46) for various Rayleigh numbers (Ra = 10^1 to 10^6) to characterize the transition from a symmetric to an asymmetric state during the fully developed stage (τ = 2000) of the Pitchfork bifurcation. For Ra values less than or equal to 7.5 × 10^3, the x-velocity is nearly zero because of the symmetry of the flow. Once the Rayleigh number reaches or exceeds 7.6 × 10^3, the cell's polarity becomes positive when the x-velocity increases and negative when it decreases.

At the initial stage, τ = 0, the temperature is uniform, as depicted in Figure 9. The figure indicates that the flows are symmetric and continuous in the early stage at τ = 1 for Ra = 5 × 10^4, 10^5, and 10^6. The Pitchfork bifurcation causes the flow to intensify over time and eventually become asymmetric. The graphs show the asymmetric streamlines and isotherms at Ra = 5 × 10^4, 10^5, and 10^6 in the transitional stages (τ = 100 and 800). It is certainly interesting that the flow oscillates for a considerable time at higher Rayleigh numbers. For all these Ra values, the flows in the valley-shaped cavity become asymmetric and steady with the passage of time. For the higher Ra = 10^6, the flow becomes asymmetric and steady at τ = 800, whereas Ra = 5 × 10^4 and 10^5 take more time to reach the asymmetric steady state. The effects of temperature and velocity on the flow in the cavity over time are quantified in Figure 10 for a range of Rayleigh numbers and various points. Figure 10a,b demonstrate that at first the temperatures are uniform at points P1 and P2, but point P5 is located close to the heated wall (see Figure 1), which causes it to heat up more quickly than the other points in the early stage. As time goes on, the temperature in the transitional stage first drops and then rises, ultimately reaching a constant value in the fully developed stage, when there are no longer any temperature variations. From Figure 10d,e, it is also clearly seen that the velocity increases with the passage of time; it overshoots in the transitional period before becoming stable in the fully developed stage. Figure 10c shows that the temperature increases in the transitional stage and becomes stable in the fully developed stage. In contrast, the velocity in Figure 10f fluctuates strongly during the transitional stage before stabilizing in the fully developed stage.
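Table 2's diagnostic translates directly into code: on the vertical centerline of a mirror-symmetric flow, the horizontal velocity vanishes, so the magnitude of u at P2 separates the two branches. The sample values below are illustrative stand-ins for Table 2, not the tabulated results.

```python
def is_symmetric(u_at_p2: float, tol: float = 1e-4) -> bool:
    """Symmetric flow leaves the centerline x-velocity (numerically) zero."""
    return abs(u_at_p2) < tol

# Hypothetical Ra -> x-velocity at P2 (0, 0.46) in the fully developed stage.
samples = {7.5e3: 2.0e-6, 7.6e3: 1.2e-2, 1.0e5: 9.93e-2}
for ra, u in samples.items():
    branch = "symmetric" if is_symmetric(u) else "asymmetric (Pitchfork branch)"
    print(f"Ra = {ra:.1e}: u(P2) = {u:.3g} -> {branch}")
```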
Figure 10. Time series of the temperature and velocity at three distinct points, P1 (0, 0.825), P2 (0, 0.46), and P5 (0.5, 0.255), for (a,d) Ra = 5 × 10^4, (b,e) Ra = 10^5, and (c,f) Ra = 10^6.
Other Bifurcations
For higher Rayleigh numbers, the asymmetric isotherms and streamlines are depicted in Figure 11. It is evident that as the Rayleigh number increases, several further bifurcations occur, leading to an increase in the number of cells. In Figure 11a, there are two cells in the cavity. With the increase in Ra, the cell number becomes four in Figure 11b and five in Figure 11c; it then reaches six at a Rayleigh number of 5 × 10^4 in Figure 11d and continues to grow beyond six, as shown in Table 3. Figure 11 demonstrates that as the Rayleigh number rises, more new tiny cells develop on the left or right side of the cavity. However, a single large cell consistently persists at the center of the cavity. This indicates that the flow configuration within the cavity becomes increasingly complex in the asymmetric steady state as the Rayleigh number rises. Table 3 shows that the Rayleigh number and the number of cells are linked: as the Rayleigh number increased from 7 × 10^3 to 10^7, the number of cells increased from two to fifteen. For Ra values between 10^4 and 10^7, there is a nearly linear link between the number of cells and the Ra, as sketched below.
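One simple way to test such a link is a least-squares fit of cell count against log10(Ra), a natural abscissa when Ra spans several decades; the choice of a logarithmic axis is our assumption, and the (Ra, cells) pairs below are placeholders standing in for Table 3, which is not reproduced here.

```python
import numpy as np

ra = np.array([7e3, 1e4, 1e5, 1e6, 1e7])   # hypothetical Table 3 rows
cells = np.array([2, 4, 6, 10, 15])        # hypothetical cell counts
slope, intercept = np.polyfit(np.log10(ra), cells, deg=1)
print(f"cells ~ {slope:.2f} * log10(Ra) {intercept:+.2f}")
```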
Unsteady Flow
An asymmetric flow structure is created as a consequence of a Pitchfork bifurcation that arises during the later stages of the transitional period; as already noted, the numerical simulation experiences a Pitchfork bifurcation early on. Figure 12 displays the streamlines and isotherms for Ra = 10^7, 5 × 10^7, and 10^8 to examine the flow at greater Rayleigh numbers over different times. In Figure 12, initially at τ = 0, the temperature is uniform for all these Ra values. The figure shows that the flows are symmetric for all these Rayleigh numbers at τ = 1, but as time passes, at τ = 5, they become asymmetric and two additional tiny cells form in the cavity at the right and left upper corners. With increasing cell numbers, all the Ra values at τ = 10 in Figure 12 exhibit the same flow pattern. At τ = 50 for Ra = 10^7, a tiny cell appears at the top center between the two largest cells in the cavity, but for Ra = 5 × 10^7 and 10^8 at τ = 50, the steady flow breaks down and becomes unsteady. In Figure 12, the flow is more convoluted for Ra = 5 × 10^7 and 10^8 at τ = 50, even though its overall structure persists, and the biggest cell has a few smaller cells at its upper right and left sides. This indicates that between Ra = 10^7 and 5 × 10^7 there is a Hopf bifurcation (see [19] for details on the bifurcation). For Ra = 10^7, the asymmetric steady state remains similar at all subsequent transitional, developed transitional, and fully developed stages, at τ = 100, 1000, 1500, and 2000, respectively. The observed trends remain consistent during both the developed transitional stage (τ = 1000 and 1500) and the fully developed stage (τ = 2000) for the two Rayleigh numbers Ra = 5 × 10^7 and 10^8. But as τ rises, as shown in Figure 12 for Ra = 10^8 at τ = 2000, both cells grow in the middle of the two largest cells. The largest central cell exhibits a right-to-left movement. The unsteady flow becomes more and more complex owing to its lack of stability.
Over time, the temperature series has been tracked and analyzed spectrally to gain an understanding of the unsteady flow at higher Rayleigh numbers. Figure 13 displays the temperature variation over time and the power spectral densities for various Ra values. Figure 13a demonstrates a steady flow throughout the fully developed stage for Ra = 10^7, while Figure 13b exhibits periodic flow for Ra = 5 × 10^7. Additionally, Figure 13c displays the power spectral density of the temperature trace in Figure 13b. The periodic flow's fundamental frequency, with harmonic modes, is f_p = 0.397. The periodic flow changes as the Rayleigh number rises, indicating the occurrence of a further bifurcation through which one periodic solution transforms into another. As the Rayleigh number continues to rise, the unsteady flow becomes chaotic, as seen in Figure 13d, which depicts the fully developed stage for Ra = 10^8. According to Figure 13e, the distinct frequency with harmonic modes vanishes for Ra = 10^8, and the flow is chaotic at the fully developed stage.

Figure 14 depicts a time series analysis of the temperature and the velocity at the three points P1, P2, and P5 for the higher Rayleigh numbers Ra = 10^7, 5 × 10^7, and 10^8. Because the fluid is initially isothermal and at rest, the temperature in Figure 14a-c is the same at the various points in the early stage for these Rayleigh numbers. In the transitional period, the temperature fluctuates with time for each Ra in Figure 14. In the advanced stage of development, as Figure 14a shows, the temperature becomes constant, that is, there are no further changes in the temperature, but in Figure 14b it becomes periodic. In Figure 14c, as time increases, the flow becomes chaotic and more complex in the fully developed stage. The velocity (Figure 14d-f), however, starts at zero and changes as time moves on through the transitional stage. Figure 14d-f show that the flow in the developed stage is stable in Figure 14d, periodic in Figure 14e, and chaotic in Figure 14f. Most importantly, it appears that a flow ending in a periodic state passes through a chaotic interval during the transitional stage, whereas in a chaotic state the flow remains chaotic at both the transitional and the developed stages.
Figure 14. Time series of the temperature and velocity at three distinct points, P1 (0, 0.825), P2 (0, 0.46), and P5 (0.5, 0.255), for (a,d) Ra = 10^7, (b,e) Ra = 5 × 10^7, and (c,f) Ra = 10^8.
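The spectral analysis used above can be reproduced with a standard Welch estimate. The synthetic signal below merely mimics a periodic temperature trace with the fundamental frequency f_p = 0.397 reported for Ra = 5 × 10^7 plus one harmonic, since the solver output itself is not available here.

```python
import numpy as np
from scipy.signal import welch

dtau = 0.0025                  # non-dimensional time step used in the runs
fs = 1.0 / dtau                # sampling rate in non-dimensional units
tau = np.arange(0.0, 200.0, dtau)
f_p = 0.397                    # fundamental frequency reported for Ra = 5e7
theta = (0.10 * np.sin(2 * np.pi * f_p * tau)
         + 0.02 * np.sin(2 * np.pi * 2 * f_p * tau))   # synthetic periodic trace

freqs, psd = welch(theta, fs=fs, nperseg=65536)
print(f"dominant frequency ~ {freqs[np.argmax(psd)]:.3f} (expected {f_p})")
```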
Hopf Bifurcation
Figure 15 displays the streamlines and isotherms for Ra = 10^7 and Ra = 2 × 10^7 to allow a more in-depth analysis of the flow at greater Rayleigh numbers. Figure 15a illustrates the more complicated flow throughout the fully developed stage for Ra = 10^7, even though it is still steady. However, a deeper inspection of the numerical results reveals that more than two tiny cells occur alternately; as a result, the flow becomes unsteady for Ra = 2 × 10^7 in the fully developed stage. This indicates that there is a Hopf bifurcation between Ra = 10^7 and 2 × 10^7. To facilitate an understanding of the Hopf bifurcation, which takes place during the transition from the steady state to the periodic stage, Figure 16 shows the attractors from τ = 300 to 2000 for Ra = 10^7 and from τ = 1000 to 1500 for Ra = 5 × 10^7 at the monitoring point P1 (0, 0.825). Figure 16a illustrates that the trajectory in the u-θ plane converges to a single point when Ra = 10^7, whereas Figure 16b displays a limit cycle for Ra = 5 × 10^7. As a result, a Hopf bifurcation has taken place by Ra = 5 × 10^7 (for further information on the Hopf bifurcation, see [65]).
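The attractor plots can be emulated with synthetic trajectories to show what distinguishes the two states: a spiral into a fixed point (steady) versus a closed loop (limit cycle, periodic). The traces below are illustrative constructions, not solver output.

```python
import numpy as np
import matplotlib.pyplot as plt

tau = np.linspace(0.0, 60.0, 6000)

# Ra ~ 1e7: perturbations decay and the u-theta trajectory spirals to a point.
u_fix = 0.05 * np.exp(-tau / 10.0) * np.cos(2 * np.pi * 0.4 * tau)
th_fix = 0.05 * np.exp(-tau / 10.0) * np.sin(2 * np.pi * 0.4 * tau)

# Ra ~ 5e7: the trajectory settles onto a closed limit cycle (post-Hopf).
u_cyc = 0.05 * np.cos(2 * np.pi * 0.397 * tau)
th_cyc = 0.05 * np.sin(2 * np.pi * 0.397 * tau)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].plot(u_fix, th_fix)
axes[0].set_title("fixed point (steady)")
axes[1].plot(u_cyc, th_cyc)
axes[1].set_title("limit cycle (periodic)")
for ax in axes:
    ax.set_xlabel("u")
    ax.set_ylabel("theta")
fig.tight_layout()
plt.show()
```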
Chaotic Flow
Figure 17a illustrates that the two cells in the upper right portion of the biggest cell grow precisely as the Rayleigh number increases. The biggest cell in the center likewise travels between the right and left sides when Ra = 10^8, as seen in Figure 17b. The unsteady flow becomes increasingly complicated and is then termed chaotic. To illustrate more clearly how the periodic condition changes to a chaotic state, Figure 18 shows the trajectories through the phase space in the u-θ plane at point P5 (0.5, 0.255) for Ra = 5 × 10^7 and 10^8. The limit cycle depicted in Figure 18a demonstrates the periodic nature of the unsteady flow at Ra = 5 × 10^7; this finding aligns with the information presented in Figure 16 and further supports the existence of a limit cycle. In Figure 18b, the trajectory for Ra = 10^8 shows that the periodic flow has become chaotic, a transition that takes place between Ra = 5 × 10^7 and 10^8. For a detailed explanation of the phase-space trajectories, see [66].
Temperature and Velocity
Figure 19 displays the temperature and velocity at the point P1 (0, 0.825) over time across various Rayleigh numbers. This information helps explain the formation of natural convection flow patterns within the cavity in response to sudden heating from the inclined walls and cooling from the top wall. The simulations were conducted for Rayleigh numbers ranging from Ra = 10^0 to 10^8, and distinctly different fluctuating flow properties were observed across this range. Figures 5, 6, 9 and 12 depict the isotherms and corresponding streamlines for the various Rayleigh numbers for the case A = 0.5. The numerical outcomes for the various Ra values, as depicted in Figure 19, exhibit discernible changes. Convective flow instabilities may first be seen at low Rayleigh numbers, and the number of waves and the unsteadiness increase with increasing Rayleigh numbers. Consistent with its symmetric and continuous character, the flow is weaker and displays symmetric behavior at Ra = 10^3. During the transitional stage, the flow develops asymmetrically for Ra = 10^4, 10^5, and 10^6; however, as the flow progresses toward the fully developed stage, it stabilizes. Finally, the flow changes into periodic and chaotic states at Ra = 10^7 and Ra = 10^8, respectively, as shown in Figure 19a. As illustrated in Figure 19b, a similar characteristic can also be seen in the x-velocity at the point P1.
Figure 18. Temperature and x-velocity trajectories in the phase space for Ra = 5 × 10^7 and Ra = 10^8 at the point P5 (0.5, 0.255).
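A simple way to separate the early, transitional, and fully developed stages in point records like these is to track a moving-window standard deviation of the signal and flag where it settles. The decaying synthetic transient and the thresholds below are illustrative choices, not quantities from the paper.

```python
import numpy as np

def rolling_std(x: np.ndarray, w: int) -> np.ndarray:
    """Standard deviation over a trailing window of w samples."""
    return np.array([x[max(0, i - w + 1):i + 1].std() for i in range(len(x))])

tau = np.linspace(0.0, 2000.0, 4000)
theta = 0.5 * np.exp(-tau / 100.0) * np.sin(0.5 * tau)   # decaying transient

sigma = rolling_std(theta, w=50)
settled = (sigma < 0.01) & (tau > 10.0)   # past the early stage and quiet
tau_dev = tau[np.argmax(settled)] if settled.any() else float("nan")
print(f"fully developed from roughly tau = {tau_dev:.0f}")
```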
Heat Transfer
During the transitional stage, convective heat transfer dominates in the V-shaped cavity; it is enhanced by the irregular fluctuations and vortices that encourage fluid mixing. For air, the Nusselt number, which expresses the ratio of convective to conductive heat transfer, is examined. The Nusselt number Nu is evaluated from the normalized temperature gradient at the wall (see [60-62] for the precise definition). In accordance with Figure 20a, a time series of the average Nusselt number on the inclined wall is provided; this measurement quantifies the heat transfer occurring across the cavity wall. Figure 20b additionally presents the Nu after normalization by Ra^(1/4). The significant temperature difference between the fluid and the wall contributes to the substantial heat transfer, so a large value of the Nusselt number is expected; this can be attributed to the simultaneous application of heating and cooling at the inclined walls and the top wall. Figure 20b illustrates how, as time moves on, the Nu dramatically decreases throughout the beginning stage and, throughout the fully developed stage, gradually attains a value that is either constant or oscillatory, depending on the Rayleigh number. This is consistent with Figure 13, where the Nusselt number oscillates for Ra ≥ 5 × 10^7 but remains constant for Ra = 10^7. In contrast to the curves in Figure 20a, the normalized curves in Figure 20b clearly collapse together. As a result, the Nu ~ Ra^(1/4) scaling works effectively for the current set of Ra values. Figure 20b demonstrates that Nu·Ra^(-1/4) maintains a nearly constant value of 1.473 as the Rayleigh number rises, although it decreases slightly.
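The collapse claimed here, Nu ~ Ra^(1/4), can be checked by compensating the averaged Nusselt numbers with Ra^(-1/4) and by fitting the exponent directly; the Nu values below are hypothetical stand-ins consistent with the reported prefactor of about 1.473, not the paper's data.

```python
import numpy as np

ra = np.array([1e5, 1e6, 1e7, 1e8])
nu = np.array([26.4, 46.7, 82.5, 146.0])   # hypothetical fully developed averages

print("Nu * Ra^(-1/4):", nu * ra**-0.25)   # roughly constant (~1.47), slightly decreasing
n, log_c = np.polyfit(np.log10(ra), np.log10(nu), deg=1)
print(f"fitted exponent n = {n:.3f} (quarter-power scaling predicts 0.25)")
```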
Conclusions
The current investigation delves into the numerical exploration of unsteady 2D natural convection flows within a valley-shaped cavity filled with air, where heating occurs along the inclined walls while cooling is applied through the top wall. The study encompasses a broad spectrum of Rayleigh numbers ranging from Ra = 10^0 to 10^8, with the Prandtl number Pr = 0.71 and the aspect ratio A = 0.5, spanning non-dimensional times from τ = 0 to τ = 5000. The research delineates the diverse flow structures evolving within the cavity over time and scrutinizes the relationship between heat transfer dynamics and Rayleigh numbers.
Analysis reveals that the progression of natural convection flows manifests in three discernible stages: an initial stage, a transitional stage, and a fully developed stage, following the sudden application of heating through the inclined walls and cooling through the top wall. This study examines the symmetric, asymmetric, and unsteady flow patterns characterizing these stages, supported by numerical findings, and focuses primarily on elucidating the flow mechanisms across all the stages. It is observed that natural convection flows remain steady in the symmetric state for Rayleigh numbers from Ra = 10^0 to 7 × 10^3. Beyond Ra = 7 × 10^3, the flows exhibit asymmetry. Furthermore, in the asymmetric state, the flow passes through an unsteady regime in the transitional stage before stabilizing at the fully developed stage for 7 × 10^3 < Ra < 10^7. This study highlights that periodic unsteady flows evolve into chaotic states during the transitional stage, reverting to periodic behavior in the developed stage at Ra = 5 × 10^7, while at higher Rayleigh numbers such as Ra = 10^8 the chaotic flow remains predominant in the unsteady regime.
Additionally, the investigation discusses the notable bifurcations observed in the fully developed states. Detailed analyses, including the power spectral density and phase-space trajectories for Pr = 0.71, are provided. Numerical studies elucidate the intricacies of heat transfer and demonstrate the influence of the Rayleigh number on both the Nusselt number and the flow-rate dynamics.
Figure 2. Flowchart of the SIMPLE method for transient flow.
Figure 3. Nusselt number time series at the right inclined wall in the valley-shaped cavity for Ra = 10^8 with definite grids and time steps.
Figure 4. Comparison of the experimental results of Holtzman (b,d,f) [32] for different Rayleigh numbers with the current study (a,c,e).
Figure 5. Streamlines and isotherms at various non-dimensional time intervals, τ, and different small Rayleigh numbers, Ra, for the symmetric steady state.
Figure 8 illustrates the isotherms and streamlines corresponding to a Rayleigh number of Ra = 10^4. Figure 8a demonstrates the continued presence of clear symmetry in the flow at Ra = 7 × 10^3. At Ra = 10^4 and Ra = 1.3 × 10^4, the flow shows asymmetry, as depicted in Figure 8b,c. It should be noted that, depending on the initial perturbations, either of the two cells might grow bigger and move toward the cavity. When the Rayleigh number reaches Ra = 1.3 × 10^4, as depicted in Figure 8c, an additional cell emerges in the top-right region of the cavity (depending on the initial perturbations), and such an asymmetric flow configuration becomes more clearly evident as the value of the Rayleigh number increases. That is, aside from the symmetry breaking, the Rayleigh number is responsible for the growing number of cells within the cavity. For instance, Figure 8b has four cells for Ra = 10^4, whereas Figure 8c has five cells for Ra = 1.3 × 10^4. The transition observed around Ra = 10^4, where a symmetric state transforms into an asymmetric state, can be characterized as a supercritical pitchfork bifurcation, which happens when the Rayleigh-Bénard instability begins to occur.
Figure 11. At the fully developed stage, streamlines and isotherms for various Rayleigh numbers.
Figure 13. A time series of the temperature at the fully developed stage and the power spectral density at point P5 (0.5, 0.255): (a) for Ra = 10^7, (b,c) for Ra = 5 × 10^7, and (d,e) for Ra = 10^8.
Figure 16 shows the attractors with values ranging from τ = 300 to 2000 for Ra = 10^7 and τ = 1000 to 1500 for Ra = 5 × 10^7 at the defining point P1 (0, 0.825), in order to facilitate an understanding of the Hopf bifurcation, which takes place during the transition from the steady to the unsteady regime.
Figure 18. Temperature and x-velocity trajectories in the phase space for Ra = 5 × 10^7 and Ra = 10^8 at the point P5 (0.5, 0.255).
5.4. Temperature and Velocity
Figure 19 displays the temperature and velocity at the designated point P1 (0, 0.825) over time across various Rayleigh numbers. This information is provided to help understand the formation of natural convection flow patterns within the cavity in response to the sudden application of heating and cooling.
Figure 19. (a) Temperature time series and (b) x-velocity time series for different Rayleigh numbers at point P1 (0, 0.825).
Figure 20. (a) The Nusselt number and (b) the normalized Nusselt number time series for several Rayleigh numbers.
Table 1. Nusselt numbers (Nu) for different grids and time steps.
Table 2. Velocities in the x-direction for different Rayleigh numbers at point P2 (0, 0.46).
Table 3. Number of cells with corresponding Rayleigh numbers.
x, y: the non-dimensional coordinates in the horizontal and vertical directions | 16,070.6 | 2024-07-14T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Physics"
] |
GAP listing of the finite subgroups of U(3) of order smaller than 2000
We have sorted the SmallGroups library of all the finite groups of order smaller than 2000 to identify the groups that possess a faithful three-dimensional irreducible representation (`irrep') and cannot be written as the direct product of a smaller group times a cyclic group. Using the computer algebra system GAP, we have scanned all the three-dimensional irreps of each of those groups to identify those that are subgroups of SU(3); we have labelled each of those subgroups of SU(3) by using the extant complete classification of the finite subgroups of SU(3). Turning to the subgroups of U(3) that are not subgroups of SU(3), we have found the generators of all of them and classified most of them in series according to their generators and structure.
Introduction
Many high-energy physicists are thrilled by the prospect that the numerical entries of the leptonic mixing matrix (PMNS matrix) might be related to some small (or maybe not so small) finite group. Many specific finite groups have been considered, for instance A_4 [1], S_4 [2], S_3 [3], T_7 [4], A_5 [5], ∆(27) [6], the group series ∆(6n^2) [7], the groups Σ(nϕ) [8], and so on. Most of the finite groups considered are subgroups of SU(3); those subgroups are especially inviting because a complete classification of them, and of their generators, has been known for over a century [9]. On the contrary, there is no complete classification of the finite subgroups of U(3),^1 though a few series of those subgroups have been derived in ref. [10]. At least one finite subgroup of U(3) has already been utilized in particle physics [11].
Although a full theoretical study of each individual group can always be undertaken, for large groups such a study becomes impractical and it is convenient to have recourse to the computer algebra system GAP, which is tailored to deal with finite groups and can readily furnish the structure, irreducible representations ('irreps'), character table, and so on, of each of them. GAP is supplemented by the SmallGroups library, which contains, in particular, all the finite groups of order smaller than 2 000. In that library each finite group has an identifier [o, j], where o ≥ 1 is the order, i.e. the number of elements, of the group and j ≥ 1 is an integer which distinguishes among the non-isomorphic groups of identical order. For instance, the group with SmallGroups identifier [4,1] is the cyclic group Z_4 ≅ {1, i, −1, −i} while the group with SmallGroups identifier [4,2] is the direct product of cyclic groups Z_2 × Z_2 ≅ {(1,1), (1,−1), (−1,1), (−1,−1)}; SmallGroups informs us that there are, in fact, only these two non-isomorphic groups with four elements. A SmallGroups listing of all the finite groups of order up to 100, together with their structure,^3 was published in ref. [13]. A SmallGroups listing of the finite groups of order up to 512 that have a faithful three-dimensional irrep and are not the direct product of a cyclic group and some other group was published in ref. [10].
^1 In this paper, whenever we use the expression "finite subgroups of U(3)" we usually mean only the subgroups of U(3) that are not subgroups of SU(3).
^2 SmallGroups uses C_n to denote the cyclic group of order n, instead of the more usual notation Z_n. SmallGroups uses the notation E(n) for the n'th root of unity.
^3 SmallGroups informs us about the structure of each group. This is given in terms of direct products (denoted '×'), semi-direct products (denoted '⋊'), or group extensions (denoted '.'). A pedagogical explanation of these concepts may be found, for instance, in ref. [12].
However, SmallGroups lists the groups of the same order in a way that does not allow one to extract much information on them. For instance, the group [12,3] ≅ A_4 is a subgroup of SU(3) and has a three-dimensional faithful irrep; the groups [12,1] and [12,4] ≅ D_6 are subgroups of SU(3) but do not possess three-dimensional irreps; the group [12,2] ≅ Z_12 is a subgroup of U(1) ⊂ U(3); the group [12,5] ≅ Z_6 × Z_2 is a subgroup of U(1) × U(1) but not of U(3).
The first step in this work was to survey the whole SmallGroups list of groups of order smaller than 2 000 in order to identify the ones that • have at least one faithful three-dimensional irreducible representation, and • cannot be written as the direct product of a smaller group and a cyclic group.
The second step in this work was to pick each of the groups above and ask GAP to compute the determinant of each of the matrices in each of its three-dimensional representations. If there is a three-dimensional representation in which all the matrices have unit determinant, then the group is a subgroup of SU(3); otherwise the group is not a subgroup of SU(3) but it is a subgroup of U(3), because every representation of a finite group is equivalent to a representation through unitary matrices. In this way, we have separated the subgroups of SU(3) from the subgroups of U(3).
A complete classification of all the finite subgroups of SU(3) has long existed [9]. There are groups (so-called type A) of diagonal matrices, i.e. Abelian groups; they may be written as direct products of cyclic factors and do not concern us here. Then there are the subgroups of U(2), which are called type B; their three-dimensional representations are (just as the ones of the type A subgroups) reducible and therefore they also do not concern us. Of interest to us are the type C and type D groups, which were best characterized in ref. [14], and also the 'exceptional' groups. In this work we give the SmallGroups identifiers of all the SU(3) subgroups of types C and D, together with their classification according to ref. [14], and also the SmallGroups identifiers of the exceptional subgroups. This is done in section 3.
There is no theoretical classification of all the finite subgroups of U(3). We feel that having a complete listing of all those subgroups of order less than 2 000, together with their generators, may be a useful step towards achieving such a classification; at the very least, it allows one to get a feeling for what it might look like. Therefore, in this work we give the SmallGroups identifiers of all the finite U(3) subgroups, together with their generators. We also partially unite those subgroups in series, viz. in sets of groups that have related generators depending on one, two, or sometimes three integers. This is done in section 4.
We also give, for every finite subgroup of U(3), the dimensions of all its inequivalent irreps, as determined by GAP.
In section 2 we explain our procedure. In an appendix we provide tables of all the finite subgroups of U(3) that have a faithful three-dimensional irrep and are not isomorphic to the direct product of a smaller group and a cyclic group. We give separate tables for the groups that are subgroups of SU(3) and for the groups that are not subgroups of SU(3). In those tables, we order the groups according to their SmallGroups classification, viz. in increasing order first of o and then of j in their [o, j] identifiers.
GAP procedures
GAP [15] is a computer algebra system that provides a programming language, including many functions that implement algebraic algorithms. It is supplemented by many libraries containing a large amount of data on algebraic objects. Using GAP it is possible to study groups and their representations, display the character tables, find the subgroups of larger groups, identify groups given through their generating matrices, and so on.
GAP allows access to the SmallGroups library through the SmallGroups package [16]. That library contains all the finite groups of 'small' orders,^4 viz. less than a certain upper bound, and also orders whose prime factorization is small in some sense. The groups are ordered by their orders; for each of the available orders, a complete list of non-isomorphic groups is given. SmallGroups contains all the groups of order less than 2 000 except order 1024, because there are many thousands of millions of groups of order 1024. SmallGroups also contains other groups with some specific orders larger than 2 000.
^4 The order of a finite group is the number of its elements.
The SmallGroups library has an identification function which returns the SmallGroups identifier of any given group. For each generic group in the library there are effective recognition algorithms available. To identify encoded and insoluble groups, two approaches are used: one is a general algorithm to solve the isomorphism problem for p-groups,^5 the second one uses the invariants^6 of stored groups [17]. Using these methods, it is possible to identify all the groups in the library, except for orders 512, 1536, and some orders above 2 000. For the identification of groups we use the GAP command IdGroup. In our work, firstly we have scanned the SmallGroups library and extracted therefrom all the groups with three-dimensional irreps. Using the GAP command G := SmallGroup(o, j); one lets G denote the group with identifier [o, j] in the SmallGroups library. The command NumberSmallGroups(o) allows one to find out how many groups there are for a chosen order o and thus automates the scanning of the library. For a given group G, GAP offers the possibility to calculate the irreps by using the command repG := IrreducibleRepresentations(G).
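Putting these commands together, the scanning loop can be sketched as follows (a minimal GAP sketch reconstructed from the commands quoted in the text, not the authors' actual script; the dimension filter via Length(Image(r, One(G))) is our assumption about how the three-dimensional irreps were selected, and a full scan up to 2 000 is slow):

# Scan the orders below 2000 for groups with a 3-dimensional irrep.
# Only orders divisible by 3 can carry a 3-dimensional irrep, since
# the dimension of a complex irrep divides the group order; this also
# excludes order 1024 automatically.
for o in [3, 6 .. 1998] do
  for j in [1 .. NumberSmallGroups(o)] do
    G := SmallGroup(o, j);                     # group with identifier [o, j]
    repG := IrreducibleRepresentations(G);     # all irreps of G
    # the image of the identity element is the identity matrix of the
    # representation, so its number of rows is the irrep dimension
    rep3 := Filtered(repG, r -> Length(Image(r, One(G))) = 3);
    if Length(rep3) > 0 then
      Print([o, j], " ", StructureDescription(G), "\n");
    fi;
  od;
od;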
It is possible to display all the irreps through a further GAP command too; however, the labeling of the irreps may then differ from the labeling received through the command above. It is convenient to select the three-dimensional irreps by filtering repG on the dimension of the representation. One may select all the elements of a given group G through the command elG := Elements(G).
^5 A p-group, where p is a prime number, is a group in which each element has a power of p as its order. That is, for each element g of a p-group, there is a non-negative integer n such that the product of p^n copies of g, and not less, is equal to the identity element e. (But the integer n is in general different for different elements g of the group.)
^6 In the SmallGroups library there is a list of distinguishing invariants for all encoded groups except those of orders 512 and 1536. This list of invariants is compressed. It provides an efficient approach to identify any encoded group in the library.
Then, a loop over an integer i parameterizing the irrep matrices allows one to list all the elements of the chosen irrep. We have selected the groups from the SmallGroups library that have at least one faithful^7 three-dimensional irrep. Then, by using the GAP command StructureDescription(G), which gives the structure of a group, we have discarded the groups that are direct products with a cyclic group. There are 10 494 213 groups of order 512 and 408 641 062 groups of order 1536. However, the groups of order 512 do not possess three-dimensional irreps, because 512 cannot be divided by three; therefore, we did not need to consider them. On the other hand, the number of groups of order 1536 is too large for all of them to be scanned in the way described above. Therefore, we have used the conjecture in ref. [18] that both nilpotent groups and groups with a normal Sylow 3-subgroup^8 do not have three-dimensional faithful irreps. Utilizing the command SmallGroupsInformation(o), one gets information about the arrangement of the groups of a given order. Using this information, we have determined the scanning range of the groups of order 1536. To check whether a group is nilpotent, the command IsNilpotentGroup(G) may be used, while NilpotencyClassOfGroup(G) gives the nilpotency class of the group G. The Sylow 3-subgroups of a group G may be found by typing the command SylowSubgroup(G, 3). We have found that only four groups of order 1536 have faithful three-dimensional irreps and cannot be written as the direct product of a smaller group and a cyclic group. For the groups that have faithful three-dimensional irreps, we have asked GAP to compute the determinant of each of the matrices in each of their three-dimensional representations; this may be done through a command of the form List(elG, x -> DeterminantMat(x)). If there is a three-dimensional representation in which all the matrices have unit determinant, then the group is a subgroup of SU(3); if there is no such representation, then the group is not a subgroup of SU(3), but it is a subgroup of U(3), because it has a three-dimensional representation and because all the representations of finite groups are equivalent to representations through unitary matrices.
^7 In order to identify the faithful irreps, we have compared all the matrices in each irrep. If different elements of the group are represented by different matrices in the irrep, then the irrep is faithful.
^8 These two concepts of group theory have been explained in ref. [19].
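As an illustration of the SU(3)/U(3) test (our sketch, not the authors' code; since the determinant is multiplicative, it suffices to check the generators):

# Decide whether a group with 3-dimensional faithful irreps is a
# subgroup of SU(3): look for one 3-dimensional irrep in which every
# matrix has unit determinant. Checking the generators is enough,
# because the determinant is a homomorphism into the multiplicative group.
IsSU3Subgroup := function(G, rep3)
  return ForAny(rep3, r ->
    ForAll(GeneratorsOfGroup(G),
           g -> DeterminantMat(Image(r, g)) = 1));
end;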
We have used different methods in order to classify the groups in the lists of the subgroups of U(3) and SU(3). One of the methods is the analysis of the generators of the three-dimensional irreps. The command GeneratorsOfGroup(G) returns a list of generators of the group G; the generators of a three-dimensional irrep may then be listed by computing the images of those generators under the irrep. By looking at these lists we have tried to find regularities in the generators. Another strategy was looking at the structures of the groups and sorting groups with analogous structures. When one has some generators, say three matrices M1, M2, and M3, a group G may be generated through the command G := Group(M1, M2, M3). Afterwards this group may be identified by finding its order, using the command Order(G) (19), or by counting the elements of the group through Size(elG).
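For instance, taking as an illustration the familiar generators of ∆(27), the cyclic permutation matrix and a diagonal matrix of cube roots of unity (the specific matrices are our example, not taken from the paper's listings):

# Build a matrix group from explicit generators and identify it.
M1 := [ [0,1,0], [0,0,1], [1,0,0] ];;     # cyclic permutation of the axes
M2 := DiagonalMat([ 1, E(3), E(3)^2 ]);;  # E(3) = exp(2*pi*i/3) in GAP
G := Group(M1, M2);;
Order(G);                  # 27
IdGroup(G);                # [ 27, 3 ], i.e. Delta(27)
StructureDescription(G);   # "(C3 x C3) : C3"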
Afterwards one may discover the SmallGroups identifier of G by using the command IdGroup(G).
The identification of some groups with large order may require a long computational time; therefore, some hints about the group classification may be acquired by analyzing the group structure, using the command (10), or by comparing the traces of the group matrices, determined through the command List(elG, x -> Trace(x)).
3 Finite subgroups of SU(3)
In this section we give the generators and the SmallGroups identifiers of all the finite subgroups of SU(3) that • have a faithful three-dimensional irrep, • cannot be written as the direct product of a smaller group and a cyclic group, • have less than 2 000 elements.
Generators
We firstly define a few 3 × 3 matrices that act as generators of the various SU(3) subgroups. All those matrices have, of course, unit determinant.
The matrices E and I are especially useful. Let n ≥ 1 be an integer; then we define the matrix L_n. Let n ≥ 1 and k ≥ 1 be integers; we define the matrix B_{n,k}. Let n ≥ 1 and r ≥ 1 be integers; we define the matrix G_{n,r} = (L_n)^{−r}.
3.2 The groups ∆(3n^2) and ∆(6n^2)
For n ≥ 1, the groups ∆(3n^2) have structure (n × n) ⋊ 3 and order 3n^2;^9 the groups ∆(6n^2) have structure [(n × n) ⋊ 3] ⋊ 2 and order 6n^2. The group ∆(3n^2) is generated by the matrices E and L_n; the group ∆(6n^2) is generated by the matrices E, I, and L_n. The SmallGroups identifiers of the groups ∆(3n^2) of order smaller than 2 000 are given in table 1;^10 the SmallGroups identifiers of the groups ∆(6n^2) of order smaller than 2 000 are given in table 2.^11 The group ∆(3 × 2^2) is isomorphic to A_4, the group of the even permutations of four objects, and also to the symmetry group of the regular tetrahedron. The group ∆(6 × 2^2) is isomorphic to S_4, the group of all the permutations of four objects, and also to the symmetry group of the cube and of the regular octahedron.
^9 We adopt the convention that 1 is the trivial group, i.e. the group that has only one element, viz. the identity element e.
^10 The group ∆(3 × 1^2) ≅ Z_3 ≅ [3,1] is not included in table 1 because it is a cyclic group.
^11 The group ∆(6 × 1^2) ≅ S_3 ≅ [6,1] is not included in table 2.
3.3 The groups C^(k)_{n,l}
We use the notation of ref. [14]. The groups C^(k)_{n,l} have structure (n × l) ⋊ 3 and order 3nl. The integer l is positive. The integer n may be written n = rl, where r is another positive integer. The integer r may be either
1. a product of prime numbers p_1, p_2, . . . which are of the form p_j = 6i_j + 1, where the numbers i_j are integers, or
2. three times a product of prime numbers as in 1.
In case 1, l may be any positive integer; in case 2, l must be a multiple of three. The integer k is a function of r defined by 1 + k + k^2 = 0 mod r and k ≤ (r − 1)/2. For most values of r there is only one possible k, but for some r more than one (usually two) k are possible. The values of r, k, and l that produce groups C^(k)_{n,l} with order smaller than 2 000 are given in tables 3 and 4. There is a very large number of groups C^(k)_{n,l} of order smaller than 2 000, therefore we opt for giving their SmallGroups identifiers only in the appendix.
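The admissible pairs (r, k) can be enumerated mechanically; a small sketch in GAP (ours, not from the paper) that lists the solutions of the defining congruence:

# For each r, list the k with 1 + k + k^2 = 0 (mod r) and k <= (r-1)/2.
for r in [2 .. 100] do
  ks := Filtered([1 .. Int((r-1)/2)], k -> (1 + k + k^2) mod r = 0);
  if Length(ks) > 0 then
    Print("r = ", r, ", k = ", ks, "\n");
  fi;
od;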
The groups C^(k)_{n,l} only have singlet and triplet irreps. The number of inequivalent singlet irreps is three when l cannot be divided by three and nine when l is a multiple of three.
3.4 The groups D^(1)_{3l,l}
We continue to use the notation of ref. [14]. For an integer l that is a multiple of three, the groups D^(1)_{3l,l} have structure [(3l × l) ⋊ 3] ⋊ 2 and order 18l^2. They are generated by the matrices E, I, and B_{3l,1} = diag(ν, ν, ν^{−2}) for ν = exp[2iπ/(3l)]. There are only three groups D^(1)_{3l,l} of order smaller than 2 000. The groups D^(1)_{3l,l} have six inequivalent singlets and three inequivalent doublets for any value of l. Besides, they have 6(l − 1) inequivalent triplet irreps and l(l − 3)/2 + 1 inequivalent six-plets.
The groups ∆(3n^2) and C^(k)_{n,l} form the class C of finite subgroups of SU(3). The groups ∆(6n^2) and D^(1)_{3l,l} form the class D of finite subgroups of SU(3). Both classes C and D contain infinite numbers of subgroups.
3.5 The exceptional subgroups of SU(3)
Besides the infinite classes C and D of subgroups, SU(3) has six 'exceptional' subgroups; their SmallGroups identifiers are given in table 5. The generators of the exceptional subgroups are given, for instance, in ref. [10], together with the references to the original papers.
The group Σ(60) is isomorphic to A_5, the group of the even permutations of five objects, and to the symmetry group of the regular icosahedron and regular dodecahedron. The group Σ(168) is isomorphic to the projective special linear group PSL(2,7) and also to the general linear group GL(3,2).
The number of inequivalent p-dimensional irreps of the exceptional finite subgroups of SU(3) is given in table 6 [21].
4 Finite subgroups of U(3)
In this section we give the generators and the SmallGroups identifiers of all the finite subgroups of U(3) that • are not subgroups of SU(3), • have a faithful three-dimensional irrep, • cannot be written as the direct product of a smaller group and a cyclic group, • have less than 2 000 elements.
For most groups, we also give the numbers of inequivalent irreps of each dimension.
There is at present no mathematical classification of the finite subgroups of U(3). Therefore, we will just classify the various subgroups that we have found using the SmallGroups library and GAP, by constructing 'series' of subgroups that have generators, structures, and numbers of irreps related among themselves. Unfortunately, there is some degree of ambiguity in this task, since any group may always be generated by different sets of generators. It is moreover often found that groups with related generators end up having quite different structures. Still, we hope to be able to shed some light on the possible types of subgroups of U(3).
The generators
We firstly define some 3 × 3 matrices that often appear as generators of the U(3) subgroups. Let • r be a product of prime numbers p_1, p_2, . . . which are of the form p_j = 6i_j + 1, where the numbers i_j are integers; • k be an integer which is a function of r defined by 1 + k + k^2 = 0 mod r and k ≤ (r − 1)/2. For most values of r there is only one possible k, but for some r more than one k are possible.
The lowest r and the corresponding k are given in table 7. In this section, whenever we let r and k denote a pair of integers, we will be referring to one of the pairs in table 7. The matrix B_{r,k} appears as a generator of many U(3) subgroups. Notice that B_{r,k} ∈ SU(3).
We use the definition of L_n in equation (24). Notice that L_n ∈ SU(3).
Several further matrices are defined as functions of integers m and j. The matrix E ≡ E_0 in equation (23a) is especially useful; both E_0 and E_1 have unit determinant, but E_m ∉ SU(3) for m > 1. For integers m and j we define the matrices F_{m,j}; notice that F_{m,j} ∉ SU(3) for m ≥ 2 or j ≥ 1. The matrix I ≡ F_{0,0} in equation (23b) has already been useful. With ω = exp(2iπ/3) and µ = exp[2iπ/(3^m)] we define further generator matrices, among them the matrix K; notice that K ∈ SU(3).
Notice that det Q_{m,j} = ξ^3 ≠ 1 in general.
The series of groups that Ludl has discovered
Ludl [10] has proved the existence of the following series of finite subgroups of U(3).
Groups T^(k)_r(m): The group T^(k)_r(m), where m is an integer larger than 1, has structure r ⋊ 3^m and order 3^m r. The groups T^(k)_r(m) of order smaller than 2 000 are given in table 8. Each of these groups has two generators, which may be chosen to be B_{r,k} in equation (28) and E_m in equation (31a).
The groups T^(k)_r(m) have 3^m inequivalent singlet irreps; all the remaining irreps of those groups are triplets.
Groups ∆(3n^2, m): The group ∆(3n^2, m), where the integer n cannot be divided by 3 and m > 1, has structure (n × n) ⋊ 3^m and order 3^m n^2. The groups ∆(3n^2, m) of order less than 2 000 are listed in table 9. The group ∆(3n^2, m) is generated by the matrices L_n in equation (24) and E_m in equation (31a).
Groups S_4(j): The group S_4(j), where j > 1, has structure A_4 ⋊ 2^j and order 3 × 2^{j+2}. There are six groups S_4(j) of order smaller than 2 000; they are given in table 10. The group S_4(j) is generated by the matrices E in equation (23a), L_2 in equation (29), and −F_{0,j}, where F_{m,j} is given in equation (32).
New series of groups that we have discovered
Ludl [10] has derived the existence of the series of groups in the previous subsection by applying mathematical theorems that he demonstrated. We have discovered some further series of groups through a careful inspection of the list of all the finite subgroups of U(3) of order smaller than 2 000 that we have produced, together with some guesswork. Clearly, since there are no theorems supporting our method, we cannot be sure that our series of groups extend to groups of order larger than 2 000. Still, the series of groups in this subsection seem to us to be on firm standing, since they are quite large and display no exceptions up to group order 2 000.
Groups L^(k)_r(n, m): For an integer n that cannot be divided by 3 and for m > 1, these are groups with structure (rn × n) ⋊ 3^m and order 3^m rn^2. While the groups T^(k)_r(m) are generated by the matrices B_{r,k} and E_m, and the groups ∆(3n^2, m) are generated by the matrices L_n and E_m, the groups L^(k)_r(n, m) are generated by all three matrices B_{r,k}, L_n, and E_m.
Groups X(n): There are several groups that have a three-dimensional irrep where all the matrices are of one of the types R(n, a, b, c), V(n, a, b, c), and W(n, a, b, c) [10], whose explicit forms involve ν = exp(2iπ/n). We call them 'groups RVW'. The groups X(n) are groups RVW where • n is a multiple of 3, • the matrices R(n, a, b, c) have a + b + c = (n/3) mod n, • the matrices V(n, a, b, c) have a + b + c = (2n/3) mod n, • the matrices W(n, a, b, c) have a + b + c = 0 mod n.
The groups X(n) have order 3n^2; the identifiers of the groups of order less than 2 000 are in table 15. The structure of X(n) is [(n/3 × n/3) ⋊ 9] ⋊ 3 provided n is not a multiple of 9; otherwise it is more complicated. The groups X(n) are generated by the matrices L_n in equation (24) and Z_1 in equation (31b).
The groups X(n) have nine inequivalent singlets; their remaining irreps are all triplets.
Tentative series of groups
We have found a few more series of groups through inspection of the list of the finite subgroups of U(3) of order less than 2 000. However, these series have few groups each and we can hardly ascertain whether and how they extend to groups of order larger than 2 000.
Groups S^(k)_r(m): These groups all have order 3^{m+2} r. The generators are the matrices B_{r,k} in equation (28), together with E in equation (23a), L_3 in equation (30), and X_3(m) in equation (34e).
Groups W(n, m): The groups W(n, m), where n cannot be divided by 3 and m > 1, are generated by the matrices E in equation (23a), L_n in equation (24), and Y_1(m) in equation (34c). They have structure (3^m n × n) ⋊ 3 and order 3^{m+1} n^2. The groups W(n, m) with order smaller than 2 000 are listed in table 18.
Each of the groups W (n, m) has 3 m inequivalent singlets; the remaining irreps of those groups are triplets.
Groups Z(n, m), Z′(n, m), and Z′′(n, m): These groups, where n is a multiple of 3 and m > 1, have structure (3^{m−1} n × n) ⋊ 3 and order 3^m n^2. The groups with order smaller than 2 000 are listed in table 19. The generators of Z(n, m) are just the same as those of W(n, m), viz. E, L_n, and Y_1(m), the only difference being that for Z(n, m) the integer n is a multiple of 3 while for W(n, m) the integer n cannot be divided by 3. The groups Z′(n, m) are generated by the matrices E, L_n, and X_1(m). The groups Z′′(n, m) are generated by the matrices E, L_n, and X_2(m). Notice that, for m = 2, Z′′(n, m) is generated by matrices with unit determinant and therefore it is a subgroup of SU(3).
Each of the groups Z(n, m) and Z′′(n, m) has 3^{m+1} inequivalent singlets. The groups Z′(n, m) have 3^m inequivalent singlets. All the remaining irreps of all those groups are triplets.
Groups Z(n, m, j) and Z′(n, m, j): These groups, where • n is a multiple of 3, • m > 1, • j is an integer, have order 3^m 2^j n^2. The groups Z(n, m, j) and Z′(n, m, j) with order smaller than 2 000 are in table 20. The groups Z(n, m, j) and Z′(n, m, j) are generated by the same matrices as the groups Z′(n, m) and Z′′(n, m), respectively, with the addition of the further generator −F_{1,j}, where F_{m,j} is given in equation (32). Notice that there are no groups Z′(n, 2, 1) in table 20, because all the matrices generating Z′(n, 2, 1), viz. E, L_n, X_2(2), and −F_{1,1}, have unit determinant and therefore Z′(n, 2, 1) is a subgroup of SU(3).
Groups H(n, m, j): When we use the generators E, L_n, X_1(2), and −F_{m,j} with m > 1, we obtain groups that we call H(n, m, j); they have order 3^{m+1} × 2n^2. The groups H(n, m, j) with j > 1 are described in the paragraph on the groups G(m, j) below.
Table: The SmallGroups identifiers of the groups G(m, j) with order smaller than 2 000.
The groups H(n, m, j) have exactly the same number of inequivalent irreps of each dimension as the groups Z(n, m + 1, j) and Z′(n, m + 1, j).
Groups Y(m, j): The groups Y(m, j), where m ≥ 2 and j ≥ 1, have structure [(2^j × 2^j) ⋊ 3^{m+1}] ⋊ 3 and order 3^{m+2} 4^j. There are only three groups Y(m, j) of order smaller than 2 000. The groups Y(m, j) are generated by L_3 in equation (30) together with further matrices; the group [1944,707] appears in table 21 too. Groups G(m, j): The groups G(m, j) are generated by the matrices E, −F_{m,j}, where F_{m,j} is given in equation (32), and diag(1, 1, ω). For m = 1 and j = 2 one may add a fourth generator, L_2, given in equation (29), to obtain the group [1296,699]. The groups G(m, j) have exactly the same number of inequivalent irreps of each dimension as the groups Z(3, m + 1, j) and Z′(3, m + 1, j).
Groups U(n, m, j): The groups U(n, m, j), where n is a multiple of 3, m > 1, and 1 < j ≤ m, have structure (3^{m−1} n × n × 3) ⋊ 3 and order 3^{m+1} n^2. We have found four groups U(n, m, j) with order smaller than 2 000, listed in (42). The generators of U(n, m, j) are the matrix E together with diag(ν, ν, ν^2), where ν = exp(2iπ/n), and the matrix given in equation (44). Notice that, when j = m (this happens in three out of the four groups U(n, m, j) in (42)), the matrix (44) reduces to the matrix Y_1(m) in equation (34c). The groups U(n, m, j) possess 3^{j+1} inequivalent singlet irreps; all their remaining irreps are triplets.
Groups V(j): The groups V(j) have order 81 × 4^j. There are three groups V(j) with order smaller than 2 000. The generators of V(j) are the matrices Z_1, X_2(2), and L_{2^j}. The groups V(j) have nine singlet irreps; all their other irreps are triplets.
Groups D(j): The groups D(j) have structure (9×2^j × 9×2^j) ⋊ 3 and order 243 × 4^j. They are generated by the matrices E_2, L_{2^j}, and T_1(2). There are two groups of order smaller than 2 000. Both these groups have nine inequivalent singlets; their other irreps are triplets.
Groups J(m): The groups J(m) have structure 3^m . [(9 × 3) ⋊ 3] and order 81 × 3^m. They are generated by the matrices Z_m and L_9. There are two groups of order smaller than 2 000. Notice that J(1) coincides with X(9) in table 15. The groups J(m) have 3^{m+1} singlets; their other irreps are triplets.
The generators of a few more groups
In this subsection we collect a few more groups together with their generators.
The groups Θ(m) have as many inequivalent irreps of each dimension as the groups Π(m, 1).
Notice that all three generators of Υ′(2) have unit determinant and therefore Υ′(2) is a subgroup of SU(3).
Conclusion
In this paper we have used the SmallGroups library to search for all the finite subgroups of U(3) of order less than 2 000 that have a faithful threedimensional irreducible representation and that cannot be written as the direct product of some smaller group and a cyclic group. We have found that there are three types of finite subgroups of U(3): • Groups that have a three-dimensional representation consisting solely of matrices of the forms (37) for some value of n. Those groups only have singlet and triplet irreducible representations.
• Groups that have a three-dimensional representation consisting solely of matrices of the forms (37) and (52) for some value of n. Those groups only have singlet, doublet, triplet, and six-plet irreducible representations.
• Groups that do not have a three-dimensional representation consisting solely of matrices of the forms (37) and (52).
We were able to group most finite subgroups of U(3) in many series depending on one, two, or sometimes three integers; the groups in each series have related generators and related numbers of irreps of each dimension. Unfortunately, many of these series have very few groups and we do not know whether and how they extend to groups of order higher than 2 000. It is possible (and it would be desirable) that some of these series may be further unified among themselves.
"Mathematics"
] |
Control Strategies for solar façade panels coupled with a Heat Pump and interacting with a District Heating Network
This work aims to understand the potential of an innovative technology for solar energy harvesting in a District Heating Network (DHN). The considered technology is the aesthetic solar façade thermal panel. In order to guarantee the temperatures required by a 3rd generation DHN (around 75 °C), a Heat Pump, using the heat from the panels as cold source, is necessary. It is worth noting that the coupling between façade panels and Heat Pump requires accurate evaluation. The optimum condition for the façade panels is to work at low temperatures (close to ambient or even below), while the Heat Pump reaches a high Coefficient Of Performance (COP) when the temperature difference between hot and cold sources is minimized. In the first part of the study, a system model has been built in Matlab/Simulink using the results of tests on the panels already performed within the H2020 ENVISION project. Different colours are considered. In the second part, a model-based predictive strategy has been defined and tuned on the system in order to guarantee the best system performance in interaction with the DHN. This work will make it possible to understand whether this technology is feasible in the presented scenario and whether this layout can improve local energy exchange.
Introduction
The need for pollutant emission reduction requires an evolution in the structure and interconnection of cities [1]. In particular, the impact of buildings on pollutant emissions is significant, and strong adjustments must be considered to move towards a sustainable building profile. In these terms, the energy-positive building framework, where buildings are able to produce energy and provide the surplus to the grid, has become more relevant [2]. This means renovating the concept of the building itself, and an emerging way is to convert the building surfaces into active energy generators using renewable sources (e.g. solar radiation, wind) [3]. The conversion of the building is the first step, pushed forward by interconnecting these buildings into a smart grid environment, in which energy flows are redistributed according to local needs [4]. The H2020 ENVISION project focuses on these aspects, from building renovation to integration into an existing smart grid. This last aspect is investigated by the Thermochemical Power Group (TPG) at the Smart Polygeneration Microgrid (SPM) of the Savona Campus of the University of Genoa (UNIGE) [5]. (ENVISION has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 767180.) One of the most critical aspects of the interconnection with a SPM is the system management, considering the water delivery temperature, the energy share between the generators involved, and the storage management. In order to guarantee a balanced and consistent energy supply to the grid a proper control system is required: this problem is faced via the implementation of the Model Predictive Control (MPC) technique.
System description
At the Savona Campus, the solar façade panel technology [6] will be tested. The technology consists of unglazed solar thermal panels that can be easily installed on a building's vertical façade using a click-on method. The panels, meeting different aesthetic requirements, are available in many different colours. The thermal energy provided by these panels is delivered at a temperature that depends on many operating conditions, including the ambient conditions (e.g. wind velocity, ambient temperature) [7]. The District Heating Network (DHN) at the Savona Campus requires a constant inlet temperature of 75 °C. Therefore, the direct coupling of the considered façade panels with the DHN is not possible: a Heat Pump (HP) bridges this temperature gap. The layout of the described system is reported in Figure 1. The panels will be available in different sizes; for the prototype the dimensions are 2 × 1 m. Considering the available building surface, 24 panels will be installed at the Savona Campus (8 for each colour). In the façade solar panels flows a mixture of water and glycol, in order to enhance the thermal exchange while avoiding ice formation in the pipes. These panels have been modelled in Matlab/Simulink to test their simulated performance depending on the different ambient conditions actually measured on site at the Savona Campus. Time-variant energy equations are computed inside the model: using the ambient parameters (such as solar radiation, air temperature, and wind velocity) as inputs, the energy absorbed by the fluid flowing through the panel pipes is provided. The importance of this model is also related to the evaluation of the best operational parameters in the coupling with the Heat Pump. The considered parameters are the interconnection between the panels (series or parallel) and the mass flow rate through the panels. It is worth specifying that the series connection consists of single branches (i.e. White-Red-Black in series), interconnected in parallel with each other (see Figure 2).
Heat Pump
The HP is installed in the system in order to provide heat at the constant temperature of 75 °C to the DHN. Considering the temperature range, the HP working fluid is R134a. In order to maintain the desired temperature at the DHN inlet, a PI control is implemented to regulate the compressor power according to the flow temperature setpoint at the outlet of the hot side. The HP (water-to-water with volumetric compressor) and the PI controller have been implemented in the Matlab/Simulink environment [8]. Also in this case the governing equations of the model are the time-variant energy equations (implemented in the evaporator and condenser blocks). The compressor chosen for this application is volumetric; its characteristics have been implemented in the code, adding some physical delays in its operation. The main inputs for the Heat Pump model are the mass flow rate and the temperature at the inlet of the evaporator (from the façade panels) and of the condenser (on the return from the DHN). It is worth underlining that the PI controller is actually set to 80 °C, since an intermediate heat exchanger between the internal circuit and the DHN will be installed; this makes it possible to overcome the thermal losses through the intermediate heat exchanger, guaranteeing 75 °C inside the real DHN circuit.
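A minimal sketch of such a PI loop (ours, not the project's Simulink implementation; the gains, time step, and first-order plant response below are placeholder assumptions):

# Discrete PI controller regulating the HP hot-side outlet temperature
# by acting on the compressor power (placeholder gains and plant model).
KP, KI = 2.0, 0.1        # proportional and integral gains (assumed)
DT = 1.0                 # time step in seconds (assumed)
SETPOINT = 80.0          # hot-side outlet setpoint in degrees Celsius

integral = 0.0
t_out = 60.0             # initial outlet temperature (assumed)
for step in range(600):
    error = SETPOINT - t_out
    integral += error * DT
    p_compressor = max(0.0, KP * error + KI * integral)  # no negative power
    # crude first-order plant response to the compressor power (assumption)
    t_out += DT * (0.05 * p_compressor - 0.02 * (t_out - 40.0))
print(f"outlet temperature after 10 min: {t_out:.1f} degC")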
Best system configuration
One of the most critical aspects of coupling solar façade panels with a heat pump is finding the configuration that leads to the best overall system performance. To do that, different schemes have been simulated, varying the mass flow rate through the single panel (min: 0.01 kg/s, max: 0.05 kg/s, as available in the literature for thermal and PV-thermal panels [9][10]) and the series or parallel configuration. The tests have been performed supposing a constant heat requirement by the DHN at the constant temperature of 75 °C; this makes it possible to minimize the changing parameters, focusing on the façade panel performance. In this scenario, the parameter that allows the system evaluation is the COP (Coefficient Of Performance), i.e. the efficiency index of the HP. Considering the compressor power absorption of the HP as the only significant consumption term in the system, the best system configuration is the one that leads to the lowest power consumption by the compressor. This consideration suggests that the most important parameter is the quantity of heat produced by the façade panels, and not just the temperature reached by the water-glycol mixture. This leads to a compromise between the mass flow rate through the panels and a high temperature at the panels' outlet. Simulations for different scenarios have been performed in the Simulink environment, considering two representative solar days for the cold and hot seasons.
As can be seen in Figure 3, the highest COP values are reached for the interconnection in series with the highest value of the mass flow rate through the panels, in both periods. The heat provided by the panels is given by temperature and mass flow rate. By increasing the mass flow rate, the temperature at the outlet of the panels decreases. In addition, an increase of the mass flow rate leads to a higher temperature at the inlet of the façade panels circuit, because the same heat absorption in the HP evaporator is achieved with a lower temperature difference through the evaporator heat exchanger, due to the total mass flow rate increasing in the panels circuit (Eq. 1). This aspect leads to a lower panel efficiency and therefore a lower heat absorption by the heat pump.
Q̇ = ṁ · c_p · (T_out − T_in)    (1)
The parallel configuration on one side guarantees a higher value of the total mass flow rate but, on the other side, does not lead to temperatures as high as the series configuration does (see Figure 4).
Figure 4. Inlet and outlet temperature of the panels in different configurations during the hot season.
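To make the trade-off behind Eq. (1) concrete, a small numerical sketch (ours; the heat duty and fluid properties are placeholder values, with c_p for a water-glycol mixture assumed around 3.6 kJ/(kg·K)):

# For a fixed evaporator heat duty, a larger circuit mass flow rate
# implies a smaller temperature drop across the evaporator (Eq. 1),
# hence a warmer return temperature to the panels.
Q_EVAP = 5.0e3            # evaporator heat duty in W (assumed)
CP = 3.6e3                # specific heat of water-glycol, J/(kg K) (assumed)
T_PANEL_OUT = 35.0        # panel outlet temperature in degC (assumed)

for m_dot in (0.08, 0.24, 0.40):          # total circuit flow rates in kg/s
    delta_t = Q_EVAP / (m_dot * CP)       # Eq. (1) solved for the temperature drop
    t_return = T_PANEL_OUT - delta_t      # temperature returning to the panels
    print(f"m_dot = {m_dot:.2f} kg/s -> dT = {delta_t:4.1f} K, "
          f"panel inlet = {t_return:4.1f} degC")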
Therefore, considering the large number of parameters affecting the system behaviour, the reliable optimum conditions can only be identified with an accurate model, which will be validated using the upcoming test campaign on the panels. In conclusion, it can be stated that, based on the model simulations, the configuration considered from now on is the one with the maximum mass flow per panel (0.05 kg/s) and a series interconnection between the panels. Table 1 reports a schematic summary of the obtained results.
Case Study
The definition of the case study is based on the foreseen application. Energy-positive buildings cannot be conceived without the capability to operate in a multi-commodity environment. Therefore, the aforementioned system composed of panels and HP has been considered as integrated into a smart grid where other local generators provide energy. This first step aims to integrate the panels and the HP with a CHP micro gas turbine (mGT) and to govern the energy share via an MPC.
MPC description
The MPC is a control architecture that utilizes an explicit model to predict the future response of a system over a prediction horizon, based on preceding and future actuator commands. Implemented for many applications [11], the use of MPC in the field of smart grids has risen consistently in the last decade [12].
MPC implementation and results
The implemented MPC strategy consists in controlling the mGT and the Heat Pump in order to match the electrical and thermal loads while respecting the intrinsic limitations of the system. The setpoints are given based on the loads applied in Matlab/Simulink, representing two days for the hot and cold seasons. The controller gives the electrical power required from the mGT (PmGT) and the heat required from the HP (Qcond) as outputs, through a predictive calculation developed on a state-space system (representing the real model) implemented inside the controller. The mGT model, through correlation curves, computes the thermal power generation related to the electrical power requested by the controller, providing both energy sources to the system. The MPC regulates the heat provided by the heat pump by acting on the mass flow rate from the DHN, while the PI control inside the HP guarantees the desired outlet temperature. The temperature from the façade panels circuit is implemented as a measured disturbance, since it is a parameter that affects the HP performance without being controllable by the MPC. The thermal loads have been considered more relevant than the electrical ones, since in this configuration no thermal storage is considered; therefore, the MPC weights have been tuned according to these system requirements. The results provided by the simulation of a summer day are reported in the following figures. The summer period is reported as a significant example, since it is the period in which the façade panels are supposed to have the strongest impact on the system. The thermal demand is well covered with the chosen controller, while the electric demand is basically a consequence of the thermal management (Figure 5 and Figure 7). The use of a thermal storage can be considered to guarantee a better coverage of both demands. Figure 6 provides a detailed view of the thermal production, considering the mGT and the Heat Pump separately. Figure 8 shows the management provided by the MPC in order to guarantee the energy productions shown in Figure 6. The parameters controlled by the MPC are PmGT for the mGT and Qcond for the HP. The chosen mGT size is 100 kWe with a minimum cut-off of 20 kW, while the HP has a maximum heat power of 20 kWth: Figure 8 confirms how the MPC respects these constraints implemented in the controller.
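A toy dispatch illustrating these constraints (our sketch, not the project's controller; the load profile, heat-to-power ratio, and the greedy thermal-priority rule are placeholder assumptions, whereas the real MPC optimizes over a state-space model with a prediction horizon):

# Greedy dispatch: at each step, serve the thermal load first with the
# HP (max 20 kWth), then with the mGT CHP, respecting the mGT electrical
# window of 20-100 kWe (assumed heat-to-power ratio of 1.5).
HP_MAX_TH = 20.0                      # kWth
MGT_MIN_E, MGT_MAX_E = 20.0, 100.0    # kWe cut-off and rating
HEAT_TO_POWER = 1.5                   # kWth per kWe from the mGT (assumed)

thermal_load = [30.0, 55.0, 80.0, 120.0, 60.0]   # placeholder profile, kWth

for q_load in thermal_load:
    q_hp = min(HP_MAX_TH, q_load)                # HP covers the base
    q_residual = q_load - q_hp
    p_mgt = q_residual / HEAT_TO_POWER           # mGT covers the rest
    p_mgt = 0.0 if p_mgt < MGT_MIN_E else min(p_mgt, MGT_MAX_E)
    q_mgt = p_mgt * HEAT_TO_POWER
    print(f"load {q_load:5.1f} kWth -> HP {q_hp:4.1f} kWth, "
          f"mGT {p_mgt:5.1f} kWe / {q_mgt:5.1f} kWth")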
Conclusions
In this work, the solar façade panel and HP models have been implemented and integrated with each other. Different scenarios (considering the mass flow through the panels and different interconnections between the panels) have been simulated in order to choose the configuration that leads to the best system performance. Subsequently, an MPC control has been implemented in order to properly manage the system (in its best configuration), considering the energy requests from the DHN. It is worth noting that the consumption related to the water-glycol pump (inside the panels circuit) has been neglected, as the pressure drop over the panels has not been evaluated in the preliminary tests. A first demo will be installed at the Savona Campus starting from June 2019; this will allow the pressure drops, which are expected to be negligible, to be experimentally determined. In the near future it will be interesting to perform a more detailed system management that also considers economic aspects: the energy cost along the day will be implemented and a cost function to be minimized will be inserted in the control strategy. A thermal storage will be experimentally installed in order to make the system more independent of the thermal and electrical demands, giving more flexibility to the whole system.
"Engineering",
"Environmental Science"
] |
Telomerase Reverse Transcriptase (TERT) Expression, Telomerase Activity, and Expression of Matrix Metalloproteinases (MMP)-1/-2/-9 in Feline Oral Squamous Cell Carcinoma Cell Lines Associated With Felis catus Papillomavirus Type-2 Infection
Telomerase activity contributes to cell immortalization by avoiding telomere shortening at each cell division; indeed, its catalytic subunit telomerase reverse transcriptase (TERT) is overexpressed in many tumors, including human oral squamous cell carcinoma (hOSCC). In these tumors, matrix metalloproteinases (MMPs), a group of zinc-dependent endopeptidases involved in cell migration, contribute to the invasive potential of cancer cells. A proportion of hOSCC is associated with infection by high-risk human papillomaviruses (HR-HPVs), whose E6 oncogene enhances TERT and MMP expression, thus promoting cancer progression. Feline oral squamous cell carcinoma (FOSCC) is a malignant tumor with a highly invasive phenotype; however, studies on telomerase activity and on TERT and MMP expression are scarce. In this study, we demonstrate telomerase activity, expression of TERT and of its transcriptional activator cMyc, along with expression of MMP-1, -2, and -9 in the FOSCC-derived cell lines SCCF2 and SCCF3, suggesting a contribution of these pathways to cell immortalization and invasion in these tumors. Recent studies suggest that a sub-group of FOSCC, as well as SCCF2 and SCCF3, are associated with Felis catus PV type-2 (FcaPV-2) infection. However, in this work, FcaPV-2 E6 gene knock-down caused no shift in either TERT, cMyc, or MMP levels, suggesting that, unlike its human counterpart, the viral oncogene plays no role in their regulation.
INTRODUCTION
Telomerase is a ribonucleoprotein enzyme complex whose main function is to extend telomeric DNA by adding repetitive sequences of six nucleotides (5′-TTAGGG-3′) at telomere ends (1). Telomerase reverse transcriptase (TERT) is the catalytic subunit and its activity consists in adding this six-nucleotide repeat using the RNA template (TR) included in the holoenzyme (1). TERT is not expressed in somatic cells (1). As a consequence, telomeric DNA is shortened at each cell division until reaching a critical point of erosion after a programmed number of cellular replications: this event is sensed by the cell machinery as severe DNA damage, leading to replicative senescence and induction of apoptosis. Therefore, telomerase activity is prominent in cells that must keep a high proliferative potential, such as stem cells and, importantly, neoplastic cells (1).
Indeed, TERT is expressed in most cancers, including human oral squamous cell carcinoma (hOSCC) (2). In these tumors, telomerase activity contributes to cellular immortalization, playing a key role in neoplastic process and representing a marker of poor prognosis (3,4). A sub-group of hOSCCs, localized at oropharyngeal sites, are believed to be caused by high-risk human papillomavirus (HR-HPVs) infection, particularly HPV type 16 (HPV-16) (5). HPV-16 may contribute to activation of telomerase through viral oncoprotein E6, which is able to enhance TERT expression and increase enzymatic activity by different mechanisms such as promoter activation or epigenetic or post-transcriptional regulation (1).
cMyc is a well-known oncogene that contributes to tumorigenesis in different manners, such as regulation of genes involved in cell proliferation and apoptosis or mediating genomic instability (6,7). It is overexpressed in many tumors, including hOSCC (8). Among the most relevant functions of cMyc, transcriptional activation of TERT gene is of great importance in neoplastic process, particularly in HPV-related cancer (9). Indeed, HPV-16 E6 is able to induce expression of cMyc, which, in turn, activates TERT promoter, thus contributing to cell immortalization (9).
Matrix metalloproteinases (MMPs) are a group of zincdependent endopeptidases that degrade basal membrane (BM) and extracellular matrix (ECM) (10). Most of the MMPs are synthetized as pro-active forms that need protease cleavage to become functional and exert their proteolytic activity toward connective tissues (10). This function is necessary for cells during several physiological processes, such as inflammation, embryogenesis, and wound healing; however, it is relevant also in pathological conditions, such as cancer (10). Degradation and remodeling of BM and ECM by cancer cells is a key step promoting invasiveness and metastasis; therefore, MMPs are expressed in many tumors, including hOSCC (10,11). For instance, MMP-1, MMP-2, and MMP-9 (also known as collagenase, gelatinase A and B, respectively) are expressed and involved in cell migration and invasion as well as in malignant progression in hOSCC (12)(13)(14). In the case of HPV-related tumors, E6 and E7 oncogenes may play a role in inducing MMP-1, MMP-2, and MMP-9 expression, thus contributing to invasive phenotype of cancer cells (15)(16)(17).
In the field of pet oncology, TERT expression and telomerase activity have been described in many canine and feline tumors but poorly investigated in OSCC (18). In cats, OSCC is among the most common malignancies (19). It is characterized by highly malignant behavior, with frequent local invasion and metastasis, a high rate of recurrence, and poor prognosis (19). Feline OSCC (FOSCC) is considered a spontaneous animal model of its human counterpart, since several biological properties are shared between the tumors of the two species, including activation of cancer-related molecular pathways and prognostic markers (19,20). However, there is only one report describing telomerase activity in FOSCC samples, and studies on the expression of TERT and its correlation with enzymatic activity in these cancers are lacking, particularly in tumor-derived living cells (21). Similarly, despite their high invasive potential, the expression of MMPs in FOSCC has never been investigated.
The aim of this study was to assess telomerase activity and the expression of TERT, cMyc, MMP-1, MMP-2, and MMP-9 in FOSCC cell lines associated with FcaPV-2 infection. Moreover, the possible involvement of FcaPV-2 E6 in their regulation was assessed by a viral gene knock-down approach.
MATERIALS AND METHODS

Cells and Cell Culture
Cervical carcinoma HeLa cells were purchased from the ATCC cell bank. The feline oral squamous cell carcinoma cell lines SCCF2 and SCCF3, developed in the Rosol laboratory, were a kind gift from Professor T. J. Rosol (The Ohio State University). SCCF2 was derived from a gingival SCC with bone invasion, and SCCF3 from a tongue lesion. Cells were cultured as previously described (27)(28)(29).
Telomeric Repeat Amplification Protocol (TRAP) Assay
Cells were plated in six-well plates at a density of 1 × 10^5 and harvested by trypsinization after 48 h. Telomerase activity was assessed by a TRAP assay using the TRAPeze® Telomerase Detection Kit (Merck #S7700) following the manufacturer's protocol. Briefly, cell pellets were homogenized in CHAPS lysis buffer, incubated on ice for 30 min, and centrifuged at 13,000 × g for 20 min at 4 °C. Supernatants were collected and the protein concentration was measured by Bradford assay (Bio-Rad Laboratories). The same amount of protein lysate (1.5 µg) was added to the reaction mixture (50 µl) and subjected to telomerase activation and amplification of telomerase products following the PCR protocol provided by the manufacturer. Amplification products were separated by electrophoresis on 15% non-denaturing polyacrylamide gels. Gels were stained with GelStar™ Nucleic Acid Gel Stain (Lonza #50535), and a ChemiDoc gel scanner (Bio-Rad) equipped with a densitometric workstation (Image Lab software, Bio-Rad) was used for quantification. Telomerase activity was calculated as the ratio between the telomerase ladder and the 36-base-pair internal control.
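To make the final quantification step concrete, the following minimal sketch in Python reproduces the ladder-to-internal-control ratio; the band intensities and sample names are hypothetical placeholders for illustration, not measured values from this study.

# Hypothetical densitometric TRAP quantification: telomerase activity is the
# summed intensity of the telomerase ladder bands divided by the intensity of
# the 36-bp internal control, then expressed relative to SCCF2.
ladder_bands = {
    "SCCF2": [1520.0, 1340.0, 1100.0, 870.0],  # assumed band intensities (a.u.)
    "SCCF3": [950.0, 800.0, 640.0, 500.0],
}
internal_36bp = {"SCCF2": 1000.0, "SCCF3": 1050.0}  # assumed control intensities

activity = {s: sum(b) / internal_36bp[s] for s, b in ladder_bands.items()}
relative = {s: 100.0 * a / activity["SCCF2"] for s, a in activity.items()}
print(relative)  # e.g., {'SCCF2': 100.0, 'SCCF3': ~57}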
RNA Extraction, Reverse Transcription (RT), and Real-Time Quantitative PCR (qPCR)
Cells were subjected to RNA extraction using the RNeasy Mini Kit (Qiagen #73504) followed by DNase digestion (Roche #04536282001) according to the manufacturers' recommendations. For each sample, 1 µg of RNA was subjected to RT using the iScript cDNA Synthesis Kit (Bio-Rad Laboratories, #1708890); RT without the addition of the reverse transcriptase enzyme was also performed on each RNA sample as a control. Real-time qPCR was performed on 50 ng of the obtained cDNAs using iTaq Universal SYBR Green Supermix (Bio-Rad Laboratories, #1725121) according to the manufacturer's instructions, employing the primers for feline TERT and FcaPV-2 E6 detailed elsewhere (31,32). The following primers for feline cMyc were designed based on the previously published gene sequence: cMyc-FW: 5′-CAAAAGGTCGGAATCGGGGT-3′, cMyc-REV: 5′-CGTGGCATCTCTTAAGGACCA-3′ (33). Feline MMP-1, MMP-2, and MMP-9 genes were amplified using the primers described previously (34)(35)(36). Amplification of feline β-2-microglobulin (β2MG) was performed in parallel to allow normalization of the results as previously reported (37). Bio-Rad CFX Manager software was used to generate gene expression data based on the 2^−ΔΔCt method, with SCCF2 arbitrarily set as the calibrator (relative expression = 1).
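As a worked example of the 2^−ΔΔCt calculation performed by the CFX Manager software, the short sketch below uses hypothetical Ct values (not data from this study); β2MG is the reference gene and SCCF2 the calibrator, as in the text.

# 2^-(ddCt) relative expression with hypothetical Ct values.
ct = {
    "SCCF2": {"TERT": 24.1, "b2MG": 18.0},
    "SCCF3": {"TERT": 26.3, "b2MG": 18.2},
}

def relative_expression(sample: str, gene: str,
                        reference: str = "b2MG", calibrator: str = "SCCF2") -> float:
    d_ct = ct[sample][gene] - ct[sample][reference]            # normalize to reference gene
    d_ct_cal = ct[calibrator][gene] - ct[calibrator][reference]
    return 2.0 ** -(d_ct - d_ct_cal)                           # 2^-(ddCt)

print(relative_expression("SCCF2", "TERT"))  # 1.0 by construction (calibrator)
print(relative_expression("SCCF3", "TERT"))  # < 1, i.e., lower TERT mRNA in SCCF3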
Immunofluorescence Staining
Cells grown for 48 h on coverslips were washed, fixed, permeabilized, and subjected to background blocking as previously reported (30). Primary antibodies anti-TERT, anti-MMP-2, and anti-MMP-9, diluted 1:50 in PBS, were applied for 2 h at room temperature (RT) in a humid chamber. The anti-MMP-1 antibody was not suitable for immunofluorescence. Slides were then washed three times for 10 min in PBS and incubated with Texas Red Alexa Fluor goat anti-rabbit (Thermo Fisher Scientific #A11030) and Alexa Fluor 488 goat anti-mouse (Thermo Fisher Scientific #A11001) secondary antibodies at 1:100 dilution for 30 min at RT in a humid chamber. Finally, after washing with PBS, the slides were mounted in aqueous medium (PBS:glycerol 1:1) containing DAPI (1:1,000) for nuclear counter-staining. Slides were scanned and photographed under a ZOE Fluorescent Cell Imager (Bio-Rad Laboratories).
FcaPV-2 E6 Gene Knock-Down by siRNA
FcaPV-2 E6 gene silencing was achieved using three custom-synthesized siRNA oligonucleotides (Silencer® Select, Ambion #4399666) as previously described (24). SCCF3 cells were plated at a density of 2 × 10^5 in six-well plates and, after 24 h, a pool of siRNA oligonucleotides or scrambled RNA (Ambion #4390843) at 50 nM was transfected using Lipofectamine 2000 (Thermo Fisher Scientific #11668027) according to the standard protocol. Cells were harvested after 48 h and analyzed by WB and qPCR.
Statistical Analysis
For statistical analysis, Student's t-test was performed using SPSS 17.0 software (SPSS Inc., Chicago, IL, USA), and differences were considered statistically significant at *P < 0.05 or **P < 0.01.
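For readers without SPSS, an equivalent two-sample test is available in SciPy; the replicate values below are hypothetical placeholders, not the study's measurements.

# Student's t-test (two independent samples), mirroring the SPSS analysis.
from scipy import stats

sccf2 = [1.00, 0.95, 1.05]  # assumed normalized densitometric replicates
sccf3 = [0.62, 0.58, 0.66]

t_stat, p_value = stats.ttest_ind(sccf2, sccf3)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # significant if P < 0.05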
RESULTS

Expression of TERT and cMyc in FOSCC Cell Lines
Expression of the telomerase catalytic subunit TERT is crucial for enzymatic activity (1). Therefore, the expression of TERT and its transcriptional activator cMyc was investigated at the gene and protein levels in SCCF2 and SCCF3 by real-time qPCR and WB, respectively (Figure 1). TERT and cMyc cDNAs were successfully amplified in both cell lines by qPCR; relative quantification analysis revealed lower TERT gene expression but higher cMyc relative mRNA levels in SCCF3 compared to SCCF2 (Figures 1A,B). The data obtained by WB followed by densitometric analysis were consistent, showing lower TERT protein expression but higher cMyc protein amounts in SCCF3 with respect to SCCF2 (Figures 1C-E). HeLa whole-cell lysate, run alongside the feline samples as an antibody control, confirmed the identity of the bands (38). All experiments were repeated at least three times with comparable results, and the differences detected between the SCCF2 and SCCF3 cell lines were statistically significant (t-test).
Telomerase Activity in FOSCC Cell Lines
Then, telomerase activity was investigated by TRAP assay in SCCF2 and SCCF3 (Figure 1F). Results from four repeated, independent experiments showed that telomerase was active in both cell lines, with lower enzymatic activity in SCCF3 with respect to SCCF2 (Figure 1G). As expected, telomerase activity was also detected in HeLa cells used as positive control but not in the no-lysate sample run as negative control [Figure 1F; (38)].

[Figure 1 caption (excerpt): Mean densitometric values ± SD from at least three repeated, independent WB experiments, normalized for tubulin expression (*P < 0.05 and **P < 0.01). (F) Telomerase activity detected in SCCF2 and SCCF3 by telomeric repeat amplification protocol (TRAP) assay; HeLa cell lysate was run alongside the feline samples as positive control. A representative gel out of four independent experiments, showing lower telomerase activity in SCCF3 vs. SCCF2 (C−: negative control with no lysate; L100bp: 100-base-pair DNA ladder, the first band from the bottom is 100 bp). (G) Quantification of telomerase activity in SCCF2 and SCCF3 by densitometric analysis, expressed as % relative to SCCF2; data were calculated as the ratio between the TERT product ladder and the 36-bp internal standard and represent the mean ± SD of four repeated, independent experiments (**P < 0.01).]
Expression of MMPs in FOSCC Cell Lines

By WB, protein bands of the investigated MMPs were detected in both cell lines (Figure 2D) and, consistently with the gene expression data, densitometric analysis confirmed lower MMP-1 and higher MMP-2 expression in SCCF3 compared to SCCF2 (Figures 2E,F). Surprisingly, MMP-9 was detected at higher protein levels in SCCF3 with respect to SCCF2, in contrast with the gene expression results (Figure 2G). The identity of the bands was confirmed in HeLa cell lysate run as antibody control (39). The experiments were repeated in triplicate, and the differences yielded by densitometric analysis were statistically significant (t-test).

[Figure 2 caption (excerpt): Boxes are cut from the same gel at the same exposure time and properly aligned according to the molecular standards loaded onto the gel. Full scans of the original gels are shown in Supplementary Figure 3. (E-G) Quantitative analysis, expressed as mean densitometric values ± standard deviations from at least three independent experiments, revealed lower MMP-1 levels and higher MMP-2 and MMP-9 protein amounts in SCCF3 compared to SCCF2. Protein bands were normalized for β-actin expression (*P < 0.05 and **P < 0.01).]
Sub-cellular Localization of TERT and MMPs in FOSCC Cell Lines
Sub-cellular localization of TERT is functionally relevant for telomerase activity; therefore, SCCF2 and SCCF3 were subjected to double IF staining for TERT, MMP-2, and MMP-9, along with HeLa cells to ensure the reactivity of the antibodies [Figure 3; (40)]. TERT (red staining) was localized mainly in the nuclei of all cell lines, as judged by the merge with the DAPI blue labeling. In SCCF2, SCCF3, and HeLa cells, TERT staining showed three different localization patterns: diffuse over the whole nuclear area, compartmentalized in proximity of the nuclear membrane, or in dot-like spots (Figure 3).
Effects of FcaPV-2 E6 Knock-Down on TERT, cMyc, and MMP Expression
The FOSCC cell lines employed in this study had previously been shown to express the FcaPV-2 E6 oncogene, at higher levels in SCCF3 (24). To investigate whether the expression of TERT, cMyc, and MMPs might depend on this viral oncogene, SCCF3 cells were subjected to E6 knock-down by siRNA, followed by WB and densitometric analysis to evaluate changes in protein levels. Moreover, given that FcaPV-2 E6 is known to degrade p53, cells were concomitantly analyzed by WB with an anti-p53 antibody to verify p53 rescue and ensure the reliability of the procedure (23,24). As expected, qPCR confirmed knock-down of E6 expression, along with accumulation of p53 as revealed by WB (Supplementary Figure 1). However, no shift in the protein expression of TERT, cMyc, MMP-1, MMP-2, or MMP-9 compared to scramble-treated cells was observed (Figure 4A). Densitometric analysis confirmed these results (Figure 4B).
DISCUSSION
Telomerase is an enzyme that contributes to immortalization in cancer cells (1). Its catalytic subunit TERT is overexpressed in many human and animal tumors; however, studies regarding the expression of TERT and telomerase activity in FOSCC are scarce (18,31). Here, we show, for the first time, TERT expression and telomerase activity in the FOSCC-derived cell lines SCCF2 and SCCF3, in agreement with the single study reporting functional enzymatic activation in FOSCC samples: this suggests that telomerase may play a role in the neoplastic process in these tumors, as shown in their human and canine counterparts (2,3,21,41). In cancer, different factors may influence the degree of telomerase activity, such as TERT expression levels, TERT sub-cellular localization, or the tissue origin of the lesion (40,42). The TRAP assay showed that the levels of telomerase activity correlated with the respective TERT mRNA and protein amounts in each cell line: this may indicate that the activation status of the enzyme depends mostly on the expression levels of its catalytic subunit in FOSCC, differently from other types of cancer, where it may also be influenced by post-translational events (43). It has been shown that telomerase exerts lower enzymatic activity when TERT is localized in proximity of the nuclear membrane or compartmentalized in nuclear spots, while a diffuse nuclear expression pattern is associated with higher functional activation (40). In this study, IF staining showed all the aforementioned patterns concomitantly in both cell lines, suggesting that telomerase activity is not correlated with TERT sub-cellular localization in FOSCC. These diverse intra-nuclear locations are possibly related to different steps of complete telomerase assembly; however, it is not fully understood how TERT localization may affect levels of telomerase activity (40). In hOSCC, tongue location is associated with higher telomerase activity with respect to gingival lesions (42). This might not be the case in FOSCC, since lower telomerase activity was detected in tongue-derived SCCF3 vs. gingival SCCF2 cells, suggesting a biological discrepancy with the human counterpart. Further studies are needed to clarify this point.

[Figure 3 caption: Sub-cellular localization of TERT and MMP-9 in SCCF2 and SCCF3. Cells were grown on coverslips and subjected to double indirect IF staining for TERT (red fluorescence) and MMP-9 (green fluorescence). Nuclei were counterstained with DAPI. The inset shows a higher magnification of the merge panel. TERT was localized over the whole nuclear area (long arrow), compartmentalized in proximity of the nuclear membrane (short arrow), or in dot-like spots (small arrowhead). MMP-9 was expressed in the cytoplasm (large arrowhead). HeLa cells were stained to ensure antibody reactivity.]
cMyc is a potent oncogene that is overexpressed in many tumors (6,7). Its key role in cancer progression is confirmed by the fact that cMyc expression levels closely correlate with chemotherapy resistance in different types of malignancies (44). This might also be plausible in FOSCC, since SCCF3, which harbors higher cMyc amounts, displayed lower sensitivity to several chemotherapeutics with respect to SCCF2, as described in a recent work (45). A possible role of cMyc in FOSCC development was further suggested by an older report showing its overexpression in tumor samples, consistent with our results (46).
In humans, a sub-group of OSCCs is associated with HR-HPV infection (5). In these tumors, the HPV-16 E6 oncoprotein switches on immortalization pathways, among them telomerase activity (1). The most relevant mechanism of telomerase activation by HPV-16 E6 is the augmentation of TERT gene expression through multilayered functions (1). For instance, HPV-16 E6 induces overexpression of cMyc, which is a transcriptional activator of the TERT promoter (9). Recent studies demonstrate that a proportion of FOSCCs is associated with FcaPV-2 infection; importantly, FcaPV-2 biological activity has previously been revealed also in SCCF2 and SCCF3, with the latter cell line harboring higher E6 mRNA amounts (24,26). In this study, the expression levels of cMyc, but not TERT, appeared to be correlated with those of E6 in SCCF cells; at first glance, these data suggested that the viral oncogene may up-regulate cMyc as in the human counterpart, but without contributing to cell transformation through induction of TERT expression in FcaPV-2-related FOSCC. However, E6 siRNA affected neither cMyc nor TERT protein amounts, indicating that their expression is independent of the viral oncogene. It has been shown that TERT expression may be induced by promoter mutations independently of the HPV status in hOSCC and cervical cancer; whether this occurs in FOSCC must be investigated in future studies, in order to shed light on the mechanisms leading to TERT expression and telomerase activation in these tumors (47).
MMPs are proteolytic enzymes produced by cancer cells to digest the ECM and BM and thereby promote local invasion and metastasis (10). FOSCCs are characterized by highly invasive behavior, particularly at the bone level (19). SCCF2 and SCCF3 expressed MMP-1/-2/-9 at the gene and protein levels, suggesting that these enzymes might contribute to the invasive potential of FOSCC cells, in agreement with studies on hOSCC demonstrating that these MMPs play a relevant role in determining the invasive phenotype of tumor cells (11). In the case of MMP-9, the inconsistency between gene and protein expression data could be due to well-known post-transcriptional and/or post-translational regulation mechanisms that can cause discrepancies between the steady-state levels of its mRNA and protein (48). A previous work reported that SCCF2 cells display greater osteolysis and bone invasion with respect to SCCF3 (29). Therefore, the higher gene and protein expression of MMP-1 in SCCF2 found in the current study might indicate that MMP-1 is the main factor influencing bone invasiveness in FOSCC. MMP-2 and MMP-9 are classified as gelatinases A and B, respectively (11). MMPs of this class are produced as zymogens that are stored in the cytoplasm to be released and activated in the extracellular environment (11,49). Thus, the cytoplasmic MMP-2 and MMP-9 staining revealed by IF in this study likely represents their intracellular storage in SCCF2 and SCCF3 and is consistent with the scenario described in hOSCC-derived cell lines (50). In veterinary oncology, overexpression of MMP-2 and MMP-9 has been reported in equine sarcoid associated with bovine PV infection (51). Moreover, the HR-HPV types associated with hOSCC may enhance the expression of MMP-2 and MMP-9 through the molecular activity of the E6 oncogene, thus promoting invasiveness (16). In our study, the higher amount of MMP-2 in SCCF3 suggested a possible functional correlation with FcaPV-2 E6 expression; however, siRNA against the viral transcript did not affect MMP protein levels, suggesting a biological difference from other PV-related tumors. Other virus-independent pathways have been shown to induce MMP-1, MMP-2, and MMP-9 expression in hOSCC, among them intracellular signaling activated by proinflammatory cytokines (11,50). Recent studies are highlighting a possible role of inflammation also in FOSCC progression; therefore, this field of inquiry should be deepened in future studies (52,53).
Previous studies revealed that SCCF2 and SCCF3 are highly representative of spontaneous tumors regarding invasiveness and cancer-related pathways (29,54). Therefore, expression of TERT and cMyc, telomerase activity, and MMP-1/-2/-9 demonstrated in this study confirm the reliability of these cell lines as a valuable preclinical model of FOSCC. FcaPV-2 E6 seems to play no role in their regulation, differently from its human counterpart; however, a possible involvement of other putative viral oncogenes cannot be excluded. Further studies are needed to clarify the role of TERT and MMPs in FOSCCs, as well as the possible involvement of FcaPV-2 in immortalization and invasion pathways in these tumors.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
AUTHOR CONTRIBUTIONS
GA, MM, and LL performed the experiments. GA, MM, PM, and GB conceived the study and drafted the manuscript.

| 5,153.4 | 2020-03-27T00:00:00.000 | ["Biology"] |
An Indirect Evaluation Method of Mold Coating Thickness in AlSi Alloy Permanent Mold Casting Production
Permanent mold casting produces the second-largest number of aluminum cast parts among all casting processes. In this process, the mold coating changes the heat transferred from the molten metal to the mold by acting as an insulating layer. Moreover, the coating thickness is a significant variable for the coating's thermal resistance, which strongly influences the microstructure of cast parts and the thermal shock on expensive molds. However, in casting production, coating peel-off and repeated recoating result in an inhomogeneous coating thickness distribution. Due to the high working temperatures of the molds, no efficient online coating thickness measurement exists. We propose an indirect evaluation method based on the as-cast surface corresponding to the coating area. Our experiments analyzed as-cast and coating surfaces at nine different coating thicknesses. The results show a close correlation between the as-cast surface roughness parameter arithmetical mean height Sa and the coating thickness. Based on this correlation, we can derive the coating thickness from the corresponding as-cast surface analysis. Furthermore, the coating peel-off area and other casting surface defects are easily recognized in these surfaces. In our next work, an affordable optical camera and proper lighting conditions will be tested for taking photos of as-cast surfaces, and an algorithm for real-time automatic evaluation will be developed.
Introduction
The global aluminum casting market has been growing because aluminum parts are lightweight and offer high corrosion resistance. 1 Among all casting processes, permanent mold casting produces the second most parts, following high pressure die casting. 2 Three main inputs influence the permanent mold casting process: the molten metal, the metallic mold, and the mold coating. The mold coating is a layered material applied to the permanent mold. This temporary layer protects the mold from the chemical, thermal, and mechanical shock of the molten metal. 3 According to the foundry coating review of Nwaogu et al., a mold coating usually contains five components: refractory filler, binder agents, suspension agents, additives, and a liquid carrier. 4 The molten metal and the metallic mold are usually prepared by machine; only the mold coating is primarily sprayed and checked by workers.
Apart from its protective function, the mold coating has multiple effects on cast part quality and production cost, as shown in Figure 1. The works of Hamasaaid et al. and Hallam et al. demonstrated a close relationship between coating thickness and thermal resistance for white and graphite coatings. 5,6 In other words, for a given coating type, a thicker coating resists more heat flux. There is a close relationship between the heat transfer coefficient and thermal resistance. 7 Therefore, the transferred heat changes the temperature distribution of the mold and the metal. 8 For the mold, when the coating is much thicker than in its initial state, the cooling time must be longer to balance the extra thermal resistance. In this way, production efficiency decreases because of the longer cycle time. In contrast, when the coating is much thinner than in its initial state or peels off, the thermal shock on the mold will be so strong that the lifetime of these expensive molds decreases. Moreover, the cooling rate has an essential effect on the solidified microstructure and mechanical properties of aluminum alloys for gravity die casting and high pressure die casting. 9,10 When the heat changes the temperature distribution of the metal, the melt flow behavior is also influenced. 2,11 Meanwhile, the surface modification brought about by the mold coating is another significant factor in flow behavior. The oxide film of the flowing melt can easily be fixed on the relatively rough coated mold surface, which increases the melt's flow rate and decreases the oxide inclusions in the cast parts. 12 This influence leads to risks in multiple aspects: dimensional accuracy, microstructure, and the mechanical properties of cast parts. Additionally, the mold coating has other functions in casting production, such as preventing soldering to the mold and venting air trapped in the mold cavity. 3,13 The above shows how the coating status influences the quality of cast parts and the lifetime of expensive casting molds. Although the mold coating status plays an important role, the coating adhesion is relatively low, so the coating tends to peel off the mold after several casting cycles (Figure 2). After some production hours, the coating thickness distribution is far from its initial state.
This lack of instrument-based monitoring poses risks in multiple aspects, e.g., unstable cast part quality management and low production efficiency. However, these risks have been neglected for decades in foundries due to the lack of suitable coating thickness gauges. So far, most workers have monitored the coating thickness only visually, according to their experience. 14
Description and Comparison of Mold Coating Thickness Gauges
Generally, the two typical offline mold coating thickness gauges are based on magnetic and ultrasonic systems, which must be in tight contact with the substrate. However, measurements of a porous coating based on these two systems can lead to a high deviation in the results. 14 A large deviation of 53 µm for a coating thickness of approx. 300 µm was also observed in our experiment, as shown in Figure 3. Thus, these two systems are more suitable for measuring low-porosity thin-film coatings on relatively smooth surfaces, such as lacquer on car bodies and paint on a metallic substrate. Additionally, other non-contact online and mobile gauges based on ultrasonic, laser, optical light, or eddy-current principles are applied for thin coating thickness measurement. [15][16][17][18] However, these gauges have strict requirements for the coating substrate geometry or coating transparency. Moreover, most thickness gauges cannot work properly at high temperature, yet, as is known to most casters, the mold is hot, usually above 200 °C. Even though some measurement systems, such as magnetic ones, can work on hot surfaces, it is still difficult for workers to use them close to the hot mold safely and with stable control. Moreover, the coating thickness distribution, rather than the coating thickness at one specific position, strongly influences the general heat transfer behavior between the coated mold and the molten metal due to the high thermal conductivity of the mold and molten metal. 19 Therefore, a quick evaluation of the coating thickness distribution at high working temperature is essential for stable cast part quality management and a long lifetime of expensive molds.
During the filling process of casting production, the melt front immediately solidifies upon contacting the coating area. Vossel et al. studied a model of the interface between melt and mold. 20 The melt front partly rubs against the coating topography, which builds up the as-cast surface. Therefore, as-cast surfaces can provide a great deal of helpful information about the coating status on the hot mold in the permanent mold casting process, as they do on the sand mold in the sand casting process. 21,22 In our work, we produced two cast parts for each of nine different coating thicknesses to analyze the coating and the corresponding as-cast surfaces with laser scanning confocal microscopy. The results demonstrate a close correlation between the as-cast surface and the coating thickness. Therefore, we propose an online evaluation method for mold coating thickness based on as-cast surfaces corresponding to the coating area.

Figure 4 shows the configuration of the designed mold with inserts made of tool steel H11; the standard steel information can be found in the data sheet. 23 Figure 5 illustrates the position and spray contact angle of the six inserts. In our experiment, the coating spray contact angles at the different insert positions varied due to the limited distance between the mold parts, which is similar to the actual casting production process. Insert 1 and insert 4 have the largest spray angle of 44°; insert 2 and insert 5 have intermediate angles of 37.6°; and insert 3 and insert 6 have the smallest angles of 35.3° (when the coating is sprayed from the specific position). In addition, three heating elements for each mold part are connected to a hotcontrol cDT?06(K) temperature controller made by Hotset GmbH. The mold temperature is measured with a K-type thermocouple in the middle of the mold inserts.
Experimental Process
This work produced nine different coating thicknesses by spraying with a given spray gun, from approx. 50 µm to approx. 400 µm, covering the typical coating thickness range for the cast part cavity. 14 Other areas, such as the riser or gating, are not discussed in this work. For each coating thickness, we cast two parts to minimize the measurement error. Figure 6 illustrates an overview of our experimental process.
Chemical Composition of AlSi Alloy
Before each casting, the molten metal chemical composition is analyzed using a Foundry Master Pro2 optical emission spectrometer made by Hitachi. The average values of each element and their ranges are listed in Table 1.
Sampling of Coating and as-Cast Surfaces
CILLOLIN AL 286 insulating mold coating (Schäfer Metallurgie GmbH) is tested in our experiment. This coating product contains 60% solid particles and 39.6% water as the liquid carrier, with a small amount of sodium silicate binder agent dissolved in the water. Further details of the chemical composition cannot be published because they are a trade secret.
First, the coating is mixed with water at a 1:1 mixing ratio. Afterward, the mixed coating is sprayed by the same person with the same spray tool onto the mold heated to 300 °C. After spraying, the internal heating elements are shut down, and the mold is cooled with air to room temperature. The inserts are disassembled and used as coating samples. After the coating samples are analyzed, the mold inserts are assembled back into the mold. Because the cast alloy has an essential effect on the as-cast surface in terms of alloy type and melt temperature, 24 the chemical composition and the molten metal temperature are controlled within a fixed range. After the melt temperature reaches 750 °C ± 5 °C and the mold temperature 300 °C ± 5 °C, the AlSi alloy melt is cast into the mold. After approx. five minutes, the cast part is ejected. Afterward, the riser and in-gate of the cast parts are removed, and the heating of the mold is shut down. The as-cast surfaces for analysis are at the exact position of the inserts in the lower row, which has a more stable surface status than the upper row. These surface areas are marked using a laser machine, as shown in Figure 7.
Sampling Analysis
The coating surfaces and the corresponding as-cast surfaces are analyzed with laser scanning confocal microscopy, as described above.
Results and Discussion
Correlation Between as-Cast Surface Roughness and Coating Thickness

Figure 8 shows the surface roughness parameter Sa of the coating surfaces and the as-cast surfaces at various coating thicknesses. The left panel of six diagrams shows the surface roughness Sa of the six individual inserts, and the right diagram shows the average Sa at the average coating thickness. Figure 8a shows the influence of the contact angle on the increase of the surface roughness Sa for both coating surfaces and as-cast surfaces. The various insert positions result in different contact angles between the mold and the sprayed material. The coating and as-cast surfaces tend to be smoother at a larger contact angle for the same coating spray time. Moreover, a smaller contact angle results in a thinner coating. Notably, the coating thickness at the contact angle of 35.3° is much lower than at the other two angles. This can be explained by relatively little spray material arriving at the more distant positions due to the Gaussian distribution of the coating thickness. The average Sa of the coating surfaces and as-cast surfaces also increases with increasing coating thickness, as can be seen in Figure 8b. Additionally, the Sa values of both surfaces follow similar tendencies, separated by an offset. The coating surface roughness cannot be 100% mapped onto the corresponding as-cast surface due to the surface tension of the Al alloy melt. However, the similar tendencies of coating and as-cast surface roughness show that the coating surface is, to some degree, decisive for the as-cast surface.
When the initial mold coating is so thin that it cannot cover the mold surface completely, the adhesion of the melt on the mold surface is stronger than with a thicker coating due to the high reactivity between the Al alloy and the tool steel. Therefore, the corresponding as-cast surface is rougher for the initial coating. However, once the mold is completely covered with coating material after two recoatings, the coating thickness plays the most significant role. Therefore, we removed the data recorded before the second recoating. Moreover, crystal water is evaporated from the sodium silicate binder agent into the as-cast surface during the first casting. This can alter the as-cast surface with fine pores; however, the effect is hardly recognizable at the second casting. Thus, we take the data of the second cast part to analyze the resulting as-cast surface and find a linear correlation between the as-cast surface roughness Sa and the coating thickness d. From the fitted curve in Figure 9, the following equation can be derived:

Sa = 0.047 d + 5.362    (Eqn. 5)

where the constant 5.362 refers to the theoretical as-cast surface roughness Sa without mold coating, and the coefficient 0.047 relates Sa to the coating thickness d (both in µm).

Figure 10 shows the surface roughness parameter Sz of the coating surfaces and the as-cast surfaces at various coating thicknesses. The left panel of six diagrams shows the surface roughness Sz of the six individual inserts, and the right diagram shows the average Sz at the average coating thickness. There is also a close correlation between the surface roughness Sz and the coating thickness, so Sz can be another suitable surface parameter for as-cast surface evaluation. However, the texture parameters Spc and Sdr of the as-cast surfaces stay at relatively constant values at different coating thicknesses, as shown in Figures 11 and 12. It is difficult to use these two parameters for analyzing irregular coating and as-cast surfaces.
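Since the stated goal is an indirect evaluation of coating thickness, Eqn. 5 can be inverted to estimate d from a measured Sa. The short Python sketch below illustrates this; it assumes the fitted coefficients transfer to new measurements, and the input Sa values are hypothetical.

# Invert Eqn. 5 (Sa = 0.047*d + 5.362) to estimate the coating thickness d
# (in micrometers) from a measured as-cast surface roughness Sa.
SLOPE = 0.047       # from the fit in Figure 9
INTERCEPT = 5.362   # theoretical as-cast Sa without coating

def coating_thickness_um(sa_um: float) -> float:
    return (sa_um - INTERCEPT) / SLOPE

for sa in (8.0, 15.0, 24.0):  # hypothetical measured Sa values
    d = coating_thickness_um(sa)
    print(f"Sa = {sa:5.1f} um  ->  estimated d = {d:6.1f} um")

Estimates outside the calibrated range of approx. 50-400 µm should be treated with caution.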
Moreover, the influence of the casting cycle on the surface roughness Sa and Sz of the coating and as-cast surfaces was also analyzed. Figure 13 shows no prominent differences in the as-cast surface status among the first five casting cycles. The coating surface roughness decreases noticeably after the first casting; from the second casting onward, the coating surface is stable. The reduction of coating surface roughness during the first casting can be explained by the sintering of the sodium silicate binder agent, which smooths the gaps between the ceramic particles.
3D-Scan Topography Photos of Coating and as-Cast Surfaces
Most surface roughness definitions cannot evaluate as-cast surfaces precisely due to their irregular and random pattern. 28,29 The mold surface after the sand blasting process is similarly irregular. Therefore, the 3D topographies of the coated mold surfaces and the corresponding as-cast surfaces were obtained by laser microscopy and compared. Figure 14 shows 3D-scan topography photos of uncoated mold inserts with the corresponding surface roughness Sa. The cast part would stick to the mold surface if the mold were uncoated; thus, no casting experiment was performed with an uncoated mold. Figure 15 shows a series of topography photos corresponding to the insert 1 to insert 3 positions after the second casting at nine different coating thicknesses. These figures show that the relatively rough particles easily become connected and sintered together. These aggregates become more prominent with a thicker coating, i.e., a longer coating spray time. The corresponding as-cast surfaces have more significant gaps in the web-like topography. However, the as-cast topography tends to be smoother than the coating surface due to the high surface tension of the melt. For the small contact angle of insert 3, this changing tendency is more prominent, which might explain the faster increase of the surface roughness Sa with increasing coating thickness seen in Figure 8. Moreover, the position and size of the coating peel-off area are easily recognized in the gray laser intensity photos of the as-cast surfaces in Figure 16.
Optical Micrographs of Near-Surface as-Cast Microstructure

Figure 17 shows the changing tendency of the as-cast microstructure of the casting specimens with increasing recoating times. The unmodified microstructure consists mainly of α-aluminum, acicular silicon, and primary silicon crystals. 28 The average near-surface microstructure tends to be coarser with increasing coating thickness, which can be explained by the additional thermal resistance at a larger coating thickness. A potential effect of the surface roughness of the coated mold on the heat transfer behavior should also be taken into consideration. 30,31 However, the coarse microstructure of the cast part corresponding to the initial coating does not fit this tendency. One hypothesis is that the air gap between the coated mold and the as-cast surface is bigger for the initial coating than for other coating thicknesses, which is supported by the higher as-cast surface roughness at the initial coating. The bigger air gap may result in a higher thermal resistance of the initial coating.
Brinell Hardness for Different Coating Thickness
The Brinell hardness measurement is carried out with a load of 25 kgf and an indenter of 5 mm diameter. Three positions are measured: at the edge near insert 5, in the middle, and at the edge near insert 2. The Brinell hardness profiles of the cast parts for different coating thicknesses are shown in Figure 18. A decreasing tendency of the hardness values is observed for all three measured positions with increasing coating thickness; this agrees with the observations from the metallographic microstructure photos.
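For reference, the Brinell number follows from the load and the indentation diameter via the standard formula; the sketch below uses the stated 25 kgf load and 5 mm ball, with a hypothetical indentation diameter as input.

# HB = 2F / (pi * D * (D - sqrt(D^2 - d^2))), F in kgf, D (ball) and d
# (indentation) in mm; standard Brinell hardness relation.
import math

def brinell_hb(load_kgf: float, ball_d_mm: float, indent_d_mm: float) -> float:
    D, d = ball_d_mm, indent_d_mm
    return 2.0 * load_kgf / (math.pi * D * (D - math.sqrt(D * D - d * d)))

print(f"HB = {brinell_hb(25.0, 5.0, 0.8):.1f}")  # hypothetical 0.8 mm indentation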
Conclusion
This paper focuses on the correlation between as-cast surface roughness and coating thickness. This correlation forms the basis of the proposed indirect evaluation method for mold coating thickness. In the next step, we will choose an affordable optical camera and set up a suitable lighting system to replace the expensive laser scanning confocal microscopy. Through a trained artificial intelligence algorithm, this variant can evaluate the as-cast surface in real time. Meanwhile, different mold surface topographies will also be designed to analyze the correlation. In this way, the coating thickness distribution can be quickly and automatically evaluated in an indirect manner. In addition, this close correlation between coating thickness and initial product surface roughness can be used not only for casting mold coatings but also in other particle spray coating methods.
The following conclusions were drawn:

• There is a near-linear correlation between the as-cast surface roughness parameter Sa and the coating thickness. This parameter is suitable for the classification of as-cast surface topographies.
• The spray contact angle plays an essential role in the surface roughness and coating thickness. A smaller contact angle results in a thinner coating with a rougher coating surface. The impact of the contact angle should be analyzed in the next step.
• The grain size of the near-surface as-cast microstructure tends to increase as the coating thickness increases. The hardness of the cast part tends to decrease as the coating thickness increases.

| 4,390.4 | 2022-11-30T00:00:00.000 | ["Materials Science"] |
Rapidly predicting Kohn–Sham total energy using data-centric AI
Predicting material properties by solving the Kohn-Sham (KS) equation, which is the basis of modern computational approaches to electronic structures, has provided significant improvements in materials science. Despite these contributions, both DFT and DFTB calculations are limited by the number of electrons and atoms, which translates into increasingly long run-times. In this work we introduce a novel, data-centric machine learning framework that rapidly and accurately predicts the KS total energy of anatase TiO2 nanoparticles (NPs) at different temperatures using only a small amount of theoretical data. The proposed framework, which we call co-modeling, eliminates the need for experimental data and is general enough to be used over any NPs to determine electronic structure and, consequently, to study physical and chemical properties more efficiently. We include a web service to demonstrate the effectiveness of our approach.
[Figure 1 caption (excerpt): (Left) The DFT algorithm. (Right) The cooperative model framework, with TiO2 as an example. The first pass (1) uses DFT/DFTB to produce the minimal viable data (the smallest data size that will produce effective co-modeling). Once the co-model is constructed, we can directly submit the atomic geometry data (2) for TiO2 to predict the Kohn-Sham total energy without having to use DFT/DFTB.]

Throughout the paper we may abbreviate Kohn-Sham total energy to total energy. We use typical ML formalism, describing inputs as a vector x, the output as a label ℓ, and a classifier as f : x → ℓ. The remainder of the paper is as follows: in "Background and related work" we provide background on KS, an overview of ML, and a brief description of the members of M. In "Methodology and experimental results" we give a detailed description of co-modelling and show its application to the TiO2 NPs. "Summary and conclusion" summarizes and concludes. Both code and data are publicly accessible [https://github.com/hasankurban/Kohn-Sham-Total-Energy-Prediction.git].
Background and related work
The Kohn-Sham equation. The DFTB formalism contains two major contributions [see Eq. (1)]: the matrix elements (the Hamiltonian and overlap matrices) and the repulsive potentials 52 . The fundamental idea of the DFTB method is to implement a second-order expansion of Kohn-Sham (KS) DFT around a reference electronic density ρ0, where ρ0 is the sum of neutral atomic densities. In the standard second-order SCC-DFTB notation, Eq. (1) takes the form

E = Σ_{iµν} n_i c_{µi} c_{νi} H⁰_{µν} + (1/2) Σ_{αβ} γ_{αβ} Δq_α Δq_β + E_rep    (1)

The first term of Eq. (1) represents a Kohn-Sham effective Hamiltonian; the second term is related to the energy due to charge fluctuations, where S_{µν} = ⟨η_µ | η_ν⟩ are the overlap matrix elements. In the DFTB formalism, the matrix elements of atomic orbitals are pre-calculated and stored. Furthermore, the second-order self-consistent-charge (SCC) extension is used in the DFTB method due to the dependence of the DFTB Hamiltonian on the atomic charges. The third term is the repulsive potential, which is approximated as a sum of two-center repulsions, with pair potentials based on the respective atom types and the interatomic distance. Typically, this problem is solved by a self-consistent approach; a schematic representation of the self-consistency cycle in the Kohn-Sham equations is given in Fig. 1 (left).
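To make the second term concrete, the toy sketch below evaluates the charge-fluctuation energy (1/2) Σ_{αβ} γ_{αβ} Δq_α Δq_β numerically; the γ matrix and the Mulliken charge fluctuations are made-up placeholders, not parameters from a real system.

# Toy evaluation of the second-order SCC energy term for three atoms.
import numpy as np

gamma = np.array([[0.40, 0.20, 0.15],
                  [0.20, 0.45, 0.18],
                  [0.15, 0.18, 0.42]])   # placeholder gamma_{alpha beta}
dq = np.array([0.12, -0.05, -0.07])      # placeholder Mulliken charge fluctuations

e_scc = 0.5 * dq @ gamma @ dq            # (1/2) sum_{ab} gamma_ab dq_a dq_b
print(f"E_SCC = {e_scc:.6f} (model units)")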
The method of calculations. Temperature-dependent structures of anatase-phase TiO2 NPs were obtained using molecular dynamics (MD) methods implemented in the DFTB+ code 53 with the hyb-0-2 54,55 set of Slater-Koster parameters. Thermal equilibrium is controlled by the NVT ensemble throughout the simulations. The MD time step was chosen as 1 fs. The temperature of the NPs was increased in steps of 50 K up to 1000 K.
Machine learning. Related work. Ellis et al. 56 introduce an ML-based framework, using a feed-forward neural network, to speed up DFT calculations for aluminum at ambient density (2.699 g/cc) and temperatures up to the melting point (933 K). The authors show that their model can accurately predict solid and liquid properties of aluminum. Li et al. 57 show that the relations between density and energy can be learned iteratively and accurately with very small data and ML models, which generalize better when the KS equations are included in the training data; the KS equations are solved while learning the exchange-correlation functional with neural networks, which improves generalization. Chandrasekaran et al. 58 demonstrate that ML models can predict the electronic charge density and DOS from atomic configuration information. Instead of learning a specific representation of the local electronic properties, the authors propose a grid-based learning and prediction scheme that is more memory-intensive but can routinely handle large systems and improve run-times. Brockherde et al. 59 perform the first MD simulation with a machine-learned density functional on malonaldehyde, constructing more accurate density functionals for realistic molecular systems. Schleder et al. 60 give an overview of ML techniques used in modern computational materials science to design novel materials and explain the present challenges and research problems.
The cooperative model framework (co-model)
We first give an informal description of co-modelling to prepare for the detailed description in Algo. 1.

An overview of co-modelling. To aid discussion we write t ∈ ℕ0 for the temperature in Kelvin and NP = {(x, y, z, a) : a ∈ {Ti, O}} for a collection of nanoparticle positions and atom types. Using DFT/DFTB we computationally determine the total Kohn-Sham energy E_ij given temperature t_i and nanoparticle NP_j (since i = j we use only one subscript). The data set has 21 values:

D = {((t_i, NP_i), E_i) : i = 1, ..., 21}    (5)

where (t, NP) is called the feature set and E the label; in ML parlance, we build a "best" function f that, given feature values, returns the energy. In this work we are interested in investigating how a linear ensemble of disparate ML models performs. We settled on a linear model, since it is among the simplest, best understood, and most widely used. The types of ML models are quite diverse, e.g., random forests, support vector machines, neural networks, and k-nearest neighbors. A detailed discussion of each model is not feasible here, but links to the implementations used in this work are given in the next section. We train every model M_k from a set of candidates using default parameters. Models that either individually perform poorly or are correlated are removed. A linear ensemble of the remaining models is further refined, yielding

f(x) = Σ_k α_k M_k(x) + β    (6)

where the α_k, β ∈ ℝ are coefficients and a constant. Using an additional tuning-parameter grid, each candidate model is either optimized for its parameters or removed via cross-validation. The simplicity of Eq. (6) belies its power: any value t ∈ [0 K, 1000 K] and acceptable NP for TiO2 can be evaluated directly, bypassing the traditional DFT/DFTB technique and saving time while preserving fidelity, as discussed in "Methodology and experimental results" (Fig. 3).
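The pruning rule, removing the poorer model of each highly correlated pair, can be sketched as follows. This is a hypothetical Python illustration with synthetic out-of-fold predictions (the paper's pipeline uses R/caret); the model names and the 0.95 correlation threshold are assumptions.

# Of each highly correlated pair of candidates, keep the lower-RMSE one.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
truth = rng.normal(size=200)
lm = truth + 0.5 * rng.normal(size=200)
glm = lm + 1e-3 * rng.normal(size=200)   # nearly identical to lm (correlated pair)
rf = truth + 0.4 * rng.normal(size=200)
preds = pd.DataFrame({"lm": lm, "glm": glm, "rf": rf})

rmse = {m: float(np.sqrt(np.mean((preds[m] - truth) ** 2))) for m in preds.columns}
corr = preds.corr(method="pearson")

keep = set(preds.columns)
for a in preds.columns:
    for b in preds.columns:
        if a < b and corr.loc[a, b] > 0.95 and a in keep and b in keep:
            keep.discard(a if rmse[a] > rmse[b] else b)  # drop the worse of the pair
print(sorted(keep))  # the nearly identical lm/glm pair collapses to one survivor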
The cooperative model framework (co-model). Presented here is a new approach to predicting the Kohn-Sham total energy of nanoparticles by combining a set of single runs of DFT/DFTB with a linear ensemble of disparate ML models, which we call the cooperative modelling framework (co-modelling); the results are rapid and accurate. A model's individual performance criterion considers either (1) Mean Absolute Error (MAE), (2) Root Mean Square Error (RMSE), or (3) R Squared (R²). Co-modelling consists of five steps:

1. Compute the KS energy for a small number of (temperature, nanoparticle) pairs. We call this the data shown in Eq. (5); Algo. lines 4-8.
2. Split the data into training data (used to build individual models) and test data (used to assess model quality). We call these Train and Test, respectively; Algo. lines 9-10.
3. Build a set of candidate models from a large corpus of models M; in this work we consider over 230 models. A model is ignored if it either performs poorly or takes longer than an arbitrary, fixed amount of time to complete (> 1 hr); Algo. lines 10-16.
4. Of the pairs of correlated models remaining, remove the poorer performing one; Algo. lines 17-18.
5. Construct a linear ensemble, possibly refining the set of models further by tuning parameters or pruning, returning the function shown in Eq. (6); Algo. lines 19-21.

The initial model class (|M| = 230) considers all regression models in caret 61 , a popular ML library in the R programming language. All models are initially run with default parameter values and become candidates if they complete computation in less than 1 hr. To construct the linear ensemble we leverage a popular R package extant in caret called caretEnsemble 62 ; this package relies on caretStack, which, from a collection of models, either prunes or optimally tunes parameters. Descriptions and some characterizations of the gamut of models used are discussed next. M consists of: lm: Linear Regression; glm: Generalized Linear Model; lmStepAIC: Generalized Linear Model with Stepwise Feature Selection; gcvEarth: Multivariate Adaptive Regression Splines; ppr: Projection Pursuit Regression; rf: Random Forest; xgbDart, xgbTree, xgbLinear: Extreme Gradient Boosting; monmlp: Monotone Multi-Layer Perceptron Neural Network; svmLinear, svmRadial: Support Vector Machines (SVMs) with Linear and Radial Kernel Functions; knn: K-Nearest Neighbors; rpart: Decision Tree (CART); gaussprLinear: Gaussian Process; icr: Independent Component Regression. We categorize these ML models as (1) ensemble models: rf, xgbDart, xgbTree, xgbLinear, and (2) non-ensemble models: lm, glm, icr, lmStepAIC, ppr, gaussprLinear, gcvEarth, svmLinear, knn, rpart, monmlp, svmRadial. Unlike non-ensemble models, where only a single model is built, ensemble models 63 consist of collections of the same type of model constructed randomly. A more detailed description is provided next.
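A compact analogue of steps 2-5 in Python follows, using scikit-learn's StackingRegressor in place of caretEnsemble/caretStack. The feature matrix, base learners, and data sizes are placeholder assumptions; the final LinearRegression meta-learner plays the role of the α_k and β in Eq. (6).

# Sketch of the co-model as a stacked linear ensemble (assumed setup).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                             # placeholder (atom, x, y, z, T) features
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=500)   # placeholder energies

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)

base_learners = [
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("svm", SVR(kernel="linear")),
    ("knn", KNeighborsRegressor(n_neighbors=5)),
]
# The final LinearRegression fits the ensemble coefficients of Eq. (6).
co_model = StackingRegressor(estimators=base_learners,
                             final_estimator=LinearRegression(), cv=10)
co_model.fit(X_tr, y_tr)
print("held-out R^2:", co_model.score(X_te, y_te))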
Co-model ensemble models. In RF the ensemble is a collection of DTs constructed with bagging (i.e., independently of each other), whereas XGB builds weak learners (of any model type) in an additive manner, one weak learner at a time. In ensembles, the final decision is given by combining the predictions of all models ("voting"). XGB lets weak learners vote along the way, while RF lets all DTs vote at the end, after all trees have been built. Each model is built using a different sample of the training data; thus, the sampling method is among the factors that determine the final models. The voting strategy is another factor that can change the final prediction, e.g., weighted or unweighted voting, where unweighted voting gives an equal weight to each DT model. RF is quite popular due to its robustness, e.g., in remote sensing 64 , land-cover classification 65 , network intrusion detection systems 66 , and sleep stage identification 67 . Error correlation among the trees and the strength of the DTs are estimated over the out-of-bag data, i.e., the data remaining from bootstrap sampling. The trade-off between the margin, which shows how well a single DT separates a correct class from an incorrect class, and the correlation between the trees determines how well RF will perform. Breiman 68 is the first RF paper, and 69 gives a review of the RF algorithm. Differing approaches improve RF with weighted voting and dynamic data reduction 70 , through sampling 71 , by improving data 72 , and with clustering 73 .
The stochastic gradient algorithm 74 improves on 75 , among the first gradient boosting algorithms for big data. Extreme Gradient Boosting (XGBoost/XGB) 76 is a scalable gradient tree boosting algorithm proven to work well in many areas, e.g., finance 77 , bioinformatics 78 , energy 79 , and music 80 . Unlike RF, the cost function given in Eq. (7) is solved in an additive manner, since it is not possible to optimize it in Euclidean space using traditional optimization methods; it can be solved using a second-order approximation 81 .
In standard XGBoost notation, the objective of Eq. (7) can be written as

L = Σ_{i=1}^{n} h(y_i, ŷ_i) + Σ_{k=1}^{K} ω(f_k),  with  ŷ_i = Σ_{k=1}^{K} f_k(x_i)    (7)

where ŷ_i represents the prediction for the i-th data point and f_k belongs to the space of trees; K is the number of additive functions, x_i the i-th data point, n the data size, m the number of input variables, and t the iteration number. q is the structure of the trees, and f_k is the output of an independent tree structure q with a leaf weight. h denotes a differentiable convex loss function that measures the difference between the true value y and the prediction ŷ. ω is the penalization term, which is used to tune the complexity of the model and avoid overfitting.
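As a usage-level illustration of this additive objective, the sketch below fits an XGBoost regressor on placeholder data; the hyperparameter values are assumptions for illustration only.

# Fit an additive tree ensemble with the xgboost library (placeholder data).
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 4))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=300)

model = xgb.XGBRegressor(
    n_estimators=100,   # K additive tree functions f_k
    max_depth=3,        # constrains the tree structure q
    learning_rate=0.1,  # shrinkage applied at each boosting round
    reg_lambda=1.0,     # part of the complexity penalty omega
)
model.fit(X, y)
print(model.predict(X[:3]))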
Co-model non-ensemble models. In this section we briefly explain the non-ensemble models under two categories, (1) linear and (2) non-linear models 82-89 . Since the models within one category learn the data according to different strategies, the final models they create differ from each other. For example, although lm can learn only linear relationships in the data, gcvEarth can learn non-linear relationships as well. icr decomposes the training data into linear combinations of components that are as independent of each other as possible. gcvEarth partitions the input data into piecewise-linear segments with differing gradients; the linear segments are connected to obtain basis functions that are used to model linear and non-linear behaviors. svmLinear treats the "best line" problem as a maximal-margin problem and solves 2-norm cost functions (least squares error). svmRadial uses a radial basis function as the kernel function.
lm is Ordinary Least Squares (OLS) under the assumption that the residual distribution is Gaussian. glm searches for the best line by assuming that the distribution of residuals comes from an exponential family, like a Poisson. lmStepAIC, a type of glm, selects the input variables using an automated algorithm. In ppr, the regression surface is modelled iteratively as a sum of general linear combinations of smooth functions, which makes ppr more general than stepwise regression procedures. gaussprLinear is a probabilistic regression model, a probability distribution over linear functions that fits the training data. rpart is a single-tree model trained by selecting an optimal attribute (feature) and then partitioning the data based on the class data for each value in the active domain; this is akin to Quinlan's C4.5. The optimal attribute is chosen using a variety of metrics, such as Gini or information gain, where each metric can lead to a different DT 91,94-96 . Subsequently, the training data points are sorted to the leaf nodes. The algorithm runs until the training data points are classified to an acceptable threshold; otherwise it iterates over new leaf nodes. After building, DTs are pruned to prevent overfitting (performing well on the training data but poorly on test data); pruning greatly impacts performance 95 . A neural network is a weighted, directed graph where nodes represent inputs/outputs and edges carry weights. monmlp is a recurrent network with a monotonicity constraint, which enforces monotonically increasing behavior of the model output with respect to covariates; monmlp handles overfitting using bootstrap aggregation with early stopping. In knn, the data itself is used to make a prediction (lazy learning): knn first searches for the k most similar objects in the training data with some distance metric and then uses voting. The choice of k can drastically change the model's prediction.
Methodology and experimental results
In this section we first explain how we generate the minimal viable data and then present our novel machine learning framework over the TiO 2 NPs and share our findings.
Data set: TiO2 nanoparticles. Figure 4 illustrates the theoretical supercell data (3-D, 2-D raw data) of the TiO2 NPs and some statistical properties of a portion of the training data. The process begins by carving TiO2 NPs from a bulk 60 × 60 × 60 supercell. Next, structures at different temperatures are computed to obtain the various TiO2 NPs. Figure 4a illustrates three different NPs generated at 300 K, 600 K, and 900 K. The structural data are taken from Kurban et al. 97 , which gives a detailed description of the initial geometry of the TiO2 NP model. The structural, electronic, and optical properties of twenty-one TiO2 NP models were obtained from density functional tight-binding (DFTB) calculations 98 . Fig. 4b shows statistical properties of the TiO2 NPs used in this study, e.g., distribution and correlation. The input variables are the atom type, the three-dimensional geometric locations of the Ti and O atoms (x, y, z), and the temperature; the Energy variable is the output variable. We observe that the variables are linearly non-correlated.

Using co-modelling to predict energetic properties. Let AC and TE stand for an initial atomic configuration and the Kohn-Sham total energy. Treating DFTB as a function, the total energy is a function solely of the atomic configuration, shown as computation (8):

TE = DFTB(AC)    (8)

Taking temperature into account, we compute the triple (9):

(AC_{i,t}, t, TE_t) = DFTB(AC_i, t)    (9)

where AC_{i,t} is the new atomic configuration obtained from an initial configuration AC_i, paired with the total energy at temperature t. The co-model is built from 21 different values described in computation (9). An initial examination of the co-model itself might lead one to believe that temperature is the principal driver of total energy. This is not the case: the co-model is learning the relationship of the triple, not simply temperature as a predictor of total energy. Our results show (including an active web service) that the triple is so well characterized that, given AC_i, AC_{i,0}, AC_{i,50}, . . . , AC_{i,1000} and a temperature t ∈ [0 K, 1000 K], it can accurately predict TE_t. The mechanics of the computation are not easy to describe. Underlying most of AI is the recurring conundrum: the model performs well, but how it works is often impossible to explain. Indeed, an active area of AI is explainability [101][102][103] . Interestingly, work is now taking place that allows AI to completely describe the phenomenon 104 . At this point, the results show an effective technique to drastically reduce run-time; it remains for future work how, if even possible, we can make some human-interpretable sense of the computation.

Figure 3 (top middle) shows the general steps to constructing a co-model that can efficiently, quickly, and effectively predict the structural, electronic, and optical properties of NPs. The framework starts by generating a minimal viable theoretical data set using DFT/DFTB (steps 1-3). The data is randomly partitioned (step 4) into training and test sets (|Train| > |Test|). Cross-validation is a standard technique to assess the quality of models. Pearson's correlation is measured among the model cohort, allowing elimination of models that are either highly correlated with a better model or perform badly. The final co-model is built and its quality measured (step 5). A more detailed description of these steps is provided next. In this work, total energy is reported in eV/atom.
The application of co-modelling to anatase TiO2 nanoparticles. Referring to Algorithm 1, the experimental detail is presented. The construction begins with 21 TiO2 NPs at temperatures ranging over [0 K, 1000 K] from DFT/DFTB. The training data, Ω_Training, represents 75% of the original data, and the test data, Ω_Test, the remaining 25%. The Ti/O ratio is the same in both Ω_Training and Ω_Test. In DFTB, the symmetries of the studied NP models were broken under heat treatment; thus, the co-model ensemble and the other ML models were trained over non-symmetric NPs105,106. The goodness of the models is measured with three metrics that produce values in [0, 1]: (1) Mean Absolute Error (MAE), (2) Root Mean Square Error (RMSE), and (3) R-squared (R^2). Metrics (1) and (2) are error metrics, so lower values are better. MAE and RMSE are distance metrics that describe the deviation from the actual data (DFT/DFTB) as a magnitude, not the accuracy of the model. R^2, in a regression model, represents the proportion of the variance in the output (dependent) variable that can be explained by the input (independent) variables. The higher the R^2, the better the model. Tenfold cross-validation is used to assess the quality of the models, and the summary is given in Fig. 5.
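For reference, the three metrics and the tenfold cross-validation can be computed as follows; this is a generic scikit-learn sketch (not the paper's own code), reusing co_model, X, and y from the previous snippet.

```python
# Compute MAE, RMSE, and R^2, and assess model quality with tenfold
# cross-validation (generic sketch; co_model, X, y as defined above).
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_hat = co_model.predict(X)
mae = mean_absolute_error(y, y_hat)            # lower is better
rmse = np.sqrt(mean_squared_error(y, y_hat))   # lower is better
r2 = r2_score(y, y_hat)                        # higher is better

cv = KFold(n_splits=10, shuffle=True, random_state=0)
cv_r2 = cross_val_score(co_model, X, y, cv=cv, scoring="r2")
print(f"MAE={mae:.4g} RMSE={rmse:.4g} R2={r2:.4g} CV R2={cv_r2.mean():.4g}")
```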
The best performing traditional ML model in the training step is ppr. The experimental results indicate that the performances of the XGBoost algorithms (xgbDART, xgbLinear, xgbTree) and rf are similar to ppr. Although some single ML models, such as ppr, xgbDART, and xgbLinear, perform well on training and test data, our framework makes it possible to create more accurate models than any of these single ML models. New data may not be well modelled by any particular single ML model, whereas the co-model remains robust. Figure 6 highlights pairwise training model differences: Fig. 6a shows R^2; Fig. 6b shows linear correlations; Fig. 6c shows a Pearson heat map. Figures for MAE and RMSE are provided as supplementary material for space reasons. After finding the best models among the various regression models, the correlations among the models created during the training step were measured. The results show that the lm-lmStepAIC, lm-gaussprLinear, glm-lm, glm-lmStepAIC, and glm-gaussprLinear pairs are highly correlated. We retain only gaussprLinear, since its performance over the training data is better than that of the others. The co-model is constructed from the remainder of the cohort. Note that none of them performed poorly over Ω_Training. The performance comparison of the traditional ML models and the cooperative model over the training data is presented in Fig. 7. The red dashed line represents the cooperative model, and we observe that it performs better than the rest of the ML models. For example, over the training data, its RMSE was 1.5 × 10^-10, where ppr, the best traditional model, had an RMSE of 1.6 × 10^-10. Figure 6c demonstrates the relative importance of individual models in the cooperative model. We observe that svmLinear, xgbTree, xgbLinear and xgbDART are the most important models, in that order, in the cooperative model, with svmLinear the most dominant. Finally, we present the performance of the cooperative model and the other models over the testing data in Table 1. On all the metrics, the results demonstrate that co-modelling performed better than all the other traditional single models. Our best model is deployed on the web and can be easily tested (https://hasan.shinyapps.io/app_1/).
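The relative importance of individual models can likewise be read off the linear meta-model's coefficients; a sketch, again continuing the hypothetical stacking example rather than the paper's caret implementation:

```python
# Read the relative importance of cohort members from the linear
# meta-model's coefficients (continues the stacking sketch above).
import numpy as np

meta = co_model.final_estimator_                 # fitted LinearRegression
names = [name for name, _ in co_model.estimators]
weights = np.abs(meta.coef_) / np.abs(meta.coef_).sum()
for name, w in sorted(zip(names, weights), key=lambda p: -p[1]):
    print(f"{name}: {w:.1%} of the co-model's weight")
```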
Summary and conclusion
We have demonstrated that by pairing DFT/DFTB with a novel, data-driven ML framework, and surprisingly little data (21 NPs), we can bypass the traditional run-time bottlenecks scientists face when using DFT/DFTB alone. Co-modelling builds a linear ensemble of models that accurately predicts the structural, electronic, and optical properties of nanoparticles (NPs). The collective time to build the ML portion is relatively minuscule, and it can even be done ab initio. Our solution is open-source and requires only standard hardware, such as a laptop. Future work includes determining what the best and minimal data and ensemble can be, whether instances of this model differ across materials or across the types of properties predicted, and generating data for general use. Additional future work includes elucidating how the co-model is understanding the relationship, possibly leading to a different, perhaps simpler, description of atomic configuration, temperature, and total energy. Another intriguing path is to reverse the computation: can we determine the most likely atomic configuration, or class of configurations, that would give rise to a given total energy? Other metal nanoparticles must be examined to ensure this result is not simply peculiar to TiO2. We are also interested in studying extreme temperatures, where experimentation is difficult at best. Finally, code and data are publicly accessible, and the cooperative model is available on the web.
"Materials Science",
"Computer Science"
] |
Optimization of Texture Density Distribution of Carbide Alloy Micro-Textured Ball-End Milling Cutter Based on Stress Field
The insertion of micro-textures reduces friction and increases the wear resistance of cutters, and it also affects the stress field of the cutter during milling. Therefore, in order to study the friction-reduction and wear-resistance mechanisms of micro-textured cutters in high-speed cutting of titanium alloys, the dynamic characteristics of the instantaneous stress field during the machining of titanium alloys with micro-textured cutters were studied by changing the distribution density of the micro-textures on the cutter. First, the micro-texture insertion area of the ball-end milling cutter was theoretically analyzed. Then, variable-density micro-textured ball-end milling cutters and non-textured cutters were used to cut titanium alloy, and mathematical models of the milling force and the cutter-chip contact area were established. Next, the stress density functions of cutters with different micro-texture densities and of non-textured cutters were established to simulate their stress fields. Finally, a genetic algorithm was used to optimize the variable-density distribution of the micro-textured cutters, with the instantaneous stress field of the cutter taken as the optimization objective. The optimal solution for the variable-density distribution of the micro-textured cutter in the cutter-chip tight contact area was obtained as follows: the texture distribution densities in the first, second, and third areas are 0.0905, 0.0712, and 0.0493, respectively.
Introduction
In recent years, titanium alloy materials have been widely used in aerospace, shipbuilding, metallurgy, light industry, chemical industry, biomedical and other industries due to their excellent physical and chemical properties. However, the low thermal conductivity and high chemical activity of titanium alloys lead to severe tool wear and low cutting efficiency, which are the main factors limiting their wider application. In the field of tribology, textured (uneven) surfaces can reduce friction and have received unprecedented attention; from this, the basic concept of surface texture was proposed. Preparing regularly arranged micro-pits or grooves on smooth, wear-prone surfaces can greatly reduce friction and surface abrasion [1-3]. Micro-textures also play an active role in the field of modern cutting tools.
Milling titanium alloys is intermittent, and the cutting process of the cutter is very complicated. The cutting force on the cutter is unevenly distributed during the milling process, and its magnitude and direction change with time. The distribution of the milling force directly affects the stress distribution inside the milling insert. Therefore, it is necessary to study the stress field of the cutter during the cutting process in order to identify the worst-case conditions of the cutter. Cheng and Li studied the stress density function and stress field of the corrugated-edge milling cutter and concluded that, during the cutting process of corrugated-edge milling cutters, the stress on the cutter is mainly distributed on the main cutting edge near the tip, and the stress is concentrated near the tip [4,5]. Fan et al. studied the cutting stress field of ceramic tools with a gradient function by finite element analysis and obtained the optimal gradient distribution index by modeling the cutting stress field under the same cutting load [6,7]. Li et al. carried out milling force prediction experiments on the titanium alloy TC21. The results show that, in high-speed milling of TC21 titanium alloy, the cutting depth and the feed per tooth have a greater impact on the cutting force, while the cutting speed and the radial cutting depth have no significant effect on the cutting force [8,9]. Kim and Ehmann simulated the static and dynamic milling forces in the face milling process. Based on the machine tool structure and fixture design, a mathematical model of the distributed force components of the face milling cutter was established, and consistent results were obtained from cutting experiments on different milling cutters and workpiece materials [10]. Wertheim et al. studied the performance of spiral and serrated-edge milling inserts in the cutting process. They concluded that curved-edge milling inserts can improve the stability of machining, reduce milling forces and improve chip flow [11]. Guo et al. numerically modeled and experimentally studied the micro-milling force of titanium alloy, taking tool runout into account. The micro-milling force model was validated by analyzing the width of the steps on the edge of the groove [12,13]. Zhang simulated the stress field of a flat-faced milling cutter and a 3D complex-groove milling cutter using the density function of the milling force as a boundary condition. It was found that a milling cutter with a rake angle and an edge inclination can fundamentally change the stress at the tool tip [14,15]. Sun et al. fabricated micro-grooves and micro-pits on the rake face of WC/Co tools and then studied the cutting performance of the tools. The results show that the composite texture with micro-pits and micro-grooves can act as a micro-reservoir that continuously replenishes lubricating oil, thus improving the cutting performance of the mixed micro-textured cutter [16-18]. Wei et al. conducted tribological and cutting experiments on aluminum alloy workpieces by sandblasting micro-textures and machining micro-geometric features onto the rake face of sapphire tools. The results show that, compared with traditional tools, the micro-textured tool edge has the lowest interfacial friction, and the cutting force is significantly reduced. Machining micro-textures on the rake face of the cutter can reduce the adhesion of the workpiece material [19]. Pang et al.
prepared symmetric conical micro-grooves and parallel micro-grooves on carbide cutters and then studied the friction performance of the cutters [20]. Lin et al. modeled the cutting force of the vertical milling cutter under oblique cutting conditions and proposed a mechanical model for predicting the cutting force of vertical milling cutters [21]. Li et al. used a multi-level fuzzy comprehensive evaluation method based on multi-objective decision theory to evaluate the cutting performance of micro-textured cutters in titanium alloy processing [22]. Darshan studied the improvement of the tribological and thermal environment in machining Inconel 718 alloy with textured tools. The results reveal that the textured tools perform better, ensuring lower tool wear (VB), reduced cutting forces (Fc), lower surface roughness (Ra) and acceptable chip form [23,24].
In summary, placing micro-textures on the surface of a tool to improve its friction-reduction and wear-resistance properties has become a hot topic. However, for milling titanium alloys, there is still a lack of theoretical research and experimental basis for the anti-wear and friction-reduction mechanisms of micro-textured cutters. Problems such as "secondary cutting" still exist during the cutting process of micro-textured cutters. A reasonable micro-texture arrangement can give the cutters good anti-wear and anti-friction performance and can also solve the secondary wear problem of micro-textured cutters, thereby improving processing efficiency. Therefore, in this paper, by changing the single density distribution of the micro-texture in the cutter-chip close contact area, the change of the tool stress field when cutting titanium alloys with a variable-density micro-textured cutter was studied. Based on the stress field, the variable-density texture distribution of the micro-textured ball-end milling cutters was optimized.
Design and Fabrication of Variable Density Micro-Textures
Previous studies of micro-textured cutters used a uniform distribution method to prepare micro-textures in the areas where the cutter and chips are in close contact [25,26]. However, from the tool wear diagram of micro-textured cutters after cutting titanium alloy, it can be seen that the wear of the tool along the contact length and width of the cutter-chip compact contact area of the rake face is irregular. The wear near the cutting edge is more severe, and, as the distance from the cutting edge increases, the wear on the tool becomes less and less [27]. There is also some wear in the direction of chip outflow. This is because, during the outflow of chips, the cutting speed of the ball-end milling cutter varies along the cutting edge, which leads to the transverse curl of the chip. As the cutting depth increases, the flow rate at the bottom of the chip differs from that at the top, and the chip curls upward. Therefore, in the process of chip deformation, "secondary cutting" occurs at the edges of the micro-texture, as shown in Figure 1. This phenomenon causes secondary wear of micro-textured cutters.
In order to solve the "secondary cutting" phenomenon of the micro-textured cutter, the region where the cutter and chip are in compact contact is divided into three regions according to the wear condition of the cutter, namely the first area X_1, the second area X_2, and the third area X_3, as shown in Figure 2. Studies have shown that pit textures can effectively reduce friction and wear [28,29]. Therefore, by changing the density of the micro-texture in each region, the dynamic evolution of secondary cutting between the micro-textured cutter and the chip was studied. Experiments have shown that a texture distribution density (defined as the ratio of the total area of the pit texture to the total area of the micro-texture preparation region) between 0.05 and 0.1 plays a better role in reducing friction and wear. Therefore, the micro-texture densities in the cutter-chip contact area of the cemented carbide tool designed in this paper are 0.05, 0.07 and 0.09.
By arranging and combining the three texture densities in the cutter-chip compact contact area, three uniformly distributed micro-texture patterns and six variable-density micro-texture patterns were obtained. The distribution combinations of the texture density are shown in Table 1. According to previous studies, when the diameter, depth and distance from the cutting edge of the micro-texture are 50 µm, 35 µm and 120 µm, respectively, the micro-textured cutter can achieve better friction reduction and wear resistance [30]. Therefore, the diameter, depth and distance from the cutting edge of the micro-texture designed in this paper are 50 µm, 35 µm and 120 µm, respectively. For the three densities of 0.05, 0.07 and 0.09, the center spacing between the micro-textures is 190 µm, 170 µm and 150 µm, respectively. The micro-textures were then prepared in the three areas of the cutter-chip compact contact zone using a fiber laser. After processing, the melt around the micro-textures was removed with sandpaper and an ultrasonic cleaner.
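The quoted densities are consistent with the stated pit geometry: for circular pits of diameter d on a grid with center spacing s, the areal density is π(d/2)^2/s^2. The following quick check assumes a square-grid layout, which the paper does not state explicitly:

```python
# Check the texture density implied by circular pits of diameter d on a
# square grid with center spacing s: density = pi * (d/2)**2 / s**2.
# The square-grid layout is an assumption; the paper states only spacings.
import math

d = 50.0  # pit diameter, micrometers
for s in (190.0, 170.0, 150.0):
    density = math.pi * (d / 2) ** 2 / s ** 2
    print(f"spacing {s:.0f} um -> density {density:.3f}")
# -> roughly 0.054, 0.068, 0.087, matching the nominal 0.05, 0.07, 0.09
```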
Design of Test Scheme
In this paper, an orthogonal experiment was used to design the test of milling titanium alloy with a micro-textured ball-end milling cutter. By changing the cutting parameters, the change of the milling force with time and the change of the tool-chip contact length and width with the feed and cutting depth were studied. The orthogonal test included three factors (cutting speed, cutting depth and feed rate) at four levels each, as shown in Table 2. The L16(4^5) orthogonal table was selected for the milling test.
According to the arrangement and distribution of textures with different densities, nine combinations were obtained, corresponding to nine micro-textured cutters; a non-textured cutter was then used for comparative analysis. A titanium alloy milling test was carried out for each cutter according to Table 2. Each cutter was tested in 16 groups, and one layer was milled on the workpiece for each set of cutting parameters. Six points were sampled along the length of the workpiece, and a set of cutting force values was measured at each point. Then, by averaging the six sets of data in each layer, the cutting force values in the X, Y, and Z directions for each set of cutting parameters were calculated. These are the basic data for the subsequent fitting of the cutting force test formula. At the same time, the position of the center point was used to measure the variation of the milling force with time.
Test Equipment
In this experiment, a VDL-1000E four-axis CNC milling machine (Dalian Machine Tool, Dalian, China) was used for the titanium alloy milling tests. The test material was titanium alloy TC4, and the cutter was a micro-textured ball-end milling cutter. Sinusoidal tongs were used to clamp the workpiece at an inclined angle of 15 degrees. In planar milling, the tool tip always participates in cutting, and the linear speed at the tip is zero; this accelerates wear of the tool tip, reduces the service life of the tool, and affects the quality of the machined surface of the workpiece. Some scholars have found that, when the processing angle of the workpiece is 15 degrees, the ball-end milling cutter achieves its best cutting performance [31,32]. The processing method adopted in this paper was climb milling, and the milling test platform is shown in Figure 3. The milling force was measured with a Kistler 9257B dynamometer (Kistler, Winterthur, Switzerland) with a response frequency of 5000 Hz. The data acquisition system was the DH5922_1394 signal test and analysis system of Donghua Testing Company (Jingjiang, China).
Analysis of Milling Force Test Results
In the orthogonal cutting tests on titanium alloy, a dynamometer was used to measure the variation with time of the cutting force of the nine variable-density micro-textured cutters and the non-textured cutter. The micro-textured cutter with a texture density combination of 0.09-0.09-0.09 is selected as an example. The cutting parameters are n = 2729 r/min, a_p = 0.7 mm, f_z = 0.08 mm/z, and the milling forces in the X, Y and Z directions over one milling cycle were collected, as shown in Table 3. A data-fitting program for the three-direction milling force was written in MATLAB, and the equations of the three-direction milling force as functions of time were fitted; the results are given in Equation (1).
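The time fit can be reproduced generically with a least-squares polynomial fit; the sketch below uses synthetic samples in place of the Table 3 data, and the polynomial basis is an assumption rather than the paper's fitted form:

```python
# Fit milling-force-vs-time samples with a least-squares polynomial
# (stand-in for the paper's MATLAB fit; the data below is synthetic).
import numpy as np

t = np.linspace(0.0, 0.0036, 50)   # one cutting pass, seconds
fx = 120 * np.sin(np.pi * t / 0.0036) + np.random.default_rng(0).normal(0, 2, t.size)

coeffs = np.polynomial.polynomial.polyfit(t, fx, deg=4)  # F_x(t) ~ sum c_k t^k
fx_hat = np.polynomial.polynomial.polyval(t, coeffs)
rmse = np.sqrt(np.mean((fx - fx_hat) ** 2))
print("coefficients:", np.round(coeffs, 2), " RMSE:", round(rmse, 2))
```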
Analysis of Test Results of Cutter-Chip Contact Area
Theoretically, the calculation of the contact area between the cutter and the chip is very complicated for a micro-textured ball-end milling cutter. Therefore, in the milling process, the contact diagram method was used to fit the contact area between the ball-end milling cutter and the chip. After milling the titanium alloy, the nine texture-density combinations and the non-textured cutter were observed through an ultra-depth-of-field microscope, and the cutter-chip contact area on the rake face of the cutter was measured. The contact length and width of the cutter-chip contact were approximated by the contact diagram method. Taking the micro-textured cutter with a texture density combination of 0.09-0.09-0.09 as an example, the experimental data of the tool-chip contact length and width obtained by measuring and fitting are shown in Table 4.
Milling Force Model of Micro-Textured Ball-End Milling Cutter
The high-speed milling of titanium alloy with a micro-textured ball-end milling cutter is intermittent cutting. As the micro-textured cutter rotates from cutting into to cutting out of the workpiece, the magnitude and direction of the instantaneous milling force change, which affects the stress field distribution on the rake face of the cutter. Therefore, it is necessary to solve for the milling cycle T, the angle of cutting into the workpiece ψ_in, the time of cutting into the workpiece t_i, and the cutting time t_0 of the micro-textured ball-end milling cutter. These quantities are determined from the spindle speed n (r/min), the number of teeth z on the tool edge, and the tool edge radius R.
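The explicit expressions do not survive in this text, but standard milling kinematics conveys the idea; in the sketch below, the tooth count and the engagement arc are assumptions rather than the paper's values:

```python
# Back-of-envelope milling kinematics (standard relations assumed here;
# the paper's own expressions for T, psi_in, t_i, t_0 are not reproduced).
n = 2729.0                    # spindle speed, r/min (example from the text)
z = 2                         # number of teeth (assumed)
T = 60.0 / n                  # milling cycle: time per revolution, s
tooth_period = T / z          # time between successive tooth engagements, s

engagement_deg = 59.0         # assumed engagement arc (psi_out - psi_in)
t_cut = T * engagement_deg / 360.0
print(f"T = {T*1e3:.2f} ms, tooth period = {tooth_period*1e3:.2f} ms, "
      f"cutting time = {t_cut*1e3:.2f} ms")
# A 59-degree arc gives ~3.6 ms, consistent with the 0.0036 s cut-out
# time quoted in the simulation section.
```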
The main cutting parameters affecting the milling force are the cutting depth and the feed per tooth. Therefore, an empirical formula model of the milling force was established using the multiple linear regression method. The empirical formula model of the milling force is

F_j = C · a_p^{x_1} · a_f^{x_2}.    (6)

Taking the logarithm of both sides gives

lg F_j = lg C + x_1 lg a_p + x_2 lg a_f.    (7)

Letting f = lg F_j, a_0 = lg C, a_1 = lg a_p, and a_2 = lg a_f, the linearization of Equation (7) is

f = a_0 + x_1 a_1 + x_2 a_2.    (8)

The calculation and fitting were performed in MATLAB, using the milling force test data collected in the X, Y and Z directions for the nine variable-density combination cutters and the non-textured cutter. Taking the micro-textured cutter with a texture density combination of 0.09-0.09-0.09 as an example, the milling forces in the X, Y and Z directions measured by the orthogonal tests are shown in Table 5. The experimental data in the table were substituted into MATLAB, and the coefficients and exponents of the empirical formula for the milling forces were obtained. The empirical formulae for the milling forces in the three directions obtained by fitting are given in Equation (9).
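The fit itself is ordinary least squares on the log-transformed model; a sketch with synthetic data standing in for Table 5:

```python
# Fit the power-law milling-force model F = C * a_p**x1 * a_f**x2 by
# linear regression in log space (synthetic data stands in for Table 5).
import numpy as np

rng = np.random.default_rng(1)
a_p = rng.uniform(0.1, 0.7, 16)          # cutting depth, mm
a_f = rng.uniform(0.02, 0.08, 16)        # feed per tooth, mm/z
F = 800 * a_p**0.9 * a_f**0.7 * rng.lognormal(0, 0.02, 16)  # "measured"

A = np.column_stack([np.ones(16), np.log10(a_p), np.log10(a_f)])
coef, *_ = np.linalg.lstsq(A, np.log10(F), rcond=None)
C, x1, x2 = 10**coef[0], coef[1], coef[2]
print(f"C = {C:.1f}, x1 = {x1:.3f}, x2 = {x2:.3f}")
```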
Test Formula for Cutter-Chip Contact Area
In the process of cutting titanium alloy with a micro-textured ball-end milling cutter, the milling force is distributed unevenly along the length and width of the cutter-chip contact, and the cutter-chip contact area directly affects the stress density function of the cutter surface. As the cutting parameters and the texture density in the cutter-chip compact contact area change, the cutter-chip contact area also changes. Therefore, it is necessary to derive an experimental formula for the cutter-chip contact area in order to determine the force density function of the variable-density micro-textured ball-end milling cutter when milling titanium alloy. The experimental data of the tool-chip contact length l_f and width l_w were fitted in MATLAB. The relationships between l_f and the feed per tooth f_z, and between l_w and the cutting depth a_p, were linear functions. Taking the micro-textured cutter with a texture density of 0.09-0.09-0.09 as an example, the obtained linear functions are given in Equation (10). Substituting Equation (10) into Equation (9) yields the milling forces for milling titanium alloy with this cutter, given in Equation (11).
Establishment of Force Density Function for Variable Density Micro-Textured Cutter
When a micro-textured ball-end milling cutter cuts titanium alloy, the instantaneous milling forces in the three directions vary with time in the cutter-chip contact area, and their distributions along the cutter-chip contact length and width are uneven. Therefore, the second-order mixed partial derivative of the instantaneous milling force model was used to obtain the force density function of the micro-textured ball-end milling cutter, and the instantaneous cutting force at any point on the cutter can be obtained by evaluating the force density function. By calculating the second-order mixed partial derivative of Equation (11), the force density functions of the micro-textured cutter with a texture density combination of 0.09-0.09-0.09 in the machine tool coordinate system are obtained as Equation (12), where l_w ∈ (1.072, 1.929) and l_f ∈ (0.5905, 0.6680). The force density functions above are solved in the coordinate system of the machine tool. However, the magnitude and direction of the instantaneous cutting force of the micro-textured ball-end milling cutter change as the cutter rotates from cutting into to cutting out of the workpiece. The schematic diagram of the cutter from cutting in to cutting out of the workpiece is shown in Figure 4. Setting XYZ as the workpiece coordinate system and XcYcZc as the tool coordinate system, Figure 4a shows the process from cutting in to cutting out of the micro-textured ball-end milling cutter, and Figure 4b shows the relationship between the coordinate systems of the tool and the machine tool during the cutting process. It can be seen from Figure 4 that the coordinate system of the micro-textured cutter changes with time during the process of cutting from point A to point B. Therefore, it is necessary to transform the milling force from the machine tool coordinate system in order to solve the force density function in the tool coordinate system. According to the transformation relationship, the transformation matrix is given in Equation (13). Applying the coordinate transformation of Equation (13), the force density functions of the micro-textured cutter in the tool coordinate system are obtained as Equation (14). The force density functions of a micro-textured cutter with a texture density combination of 0.07-0.05-0.09 are given in Equation (15).
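The mixed-derivative step can be checked symbolically; the sketch below uses a hypothetical fitted force expression, since Equation (11) itself is not reproduced in this text:

```python
# Force density as the second-order mixed partial derivative of a fitted
# cumulative force F(x, y, t) over the contact patch: q = d^2 F / (dx dy).
# F below is a hypothetical stand-in for the paper's Equation (11).
import sympy as sp

x, y, t = sp.symbols("x y t", positive=True)
F = 350 * x**1.2 * y**1.3 * sp.sin(sp.pi * t / 0.0036)  # hypothetical fit

q = sp.diff(F, x, y)                       # mixed partial: force density
q_fn = sp.lambdify((x, y, t), q, "numpy")
print(q)
print("q at patch center, mid-cycle:", float(q_fn(0.3, 0.9, 0.0018)))
```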
Establishing the Tool Model
Due to experimental limitations, it is impossible to measure the instantaneous stress field of the cutter in real time. Therefore, in this paper, the finite element simulation method was used to study the distribution of the stress field of the cutter during titanium alloy cutting. The instantaneous change of the stress field at any point on the cutting tool was obtained by finite element simulation, which provides basic data for further optimizing the parameter design of the micro-textures. First, the cutter was modeled in SolidWorks; a micro-textured cutter with a texture density combination of 0.09-0.09-0.09 was taken as an example, and the model is shown in Figure 5. The tool diameter is 20 mm, and the micro-texture is placed in the cutter-chip compact contact area. The micro-texture consists of micro-pits with a diameter of 50 µm and a depth of 35 µm, at a distance of 120 µm from the cutting edge. The micro-textures are uniformly distributed, and the center distance between adjacent textures is 150 µm. The material parameters of the cutter are shown in Table 6.
Table 6. Constitutive parameters of the tool materials [33].

ANSYS Workbench 16.0 (ANSYS, Canonsburg, PA, USA) was used to simulate and analyze the instantaneous stress field of the cutter. The force distribution simulation of the micro-textured ball-end milling cutter followed the steps of importing the model, defining material attributes, partitioning the mesh, defining boundary conditions, solving, and analyzing the results. Meshing is very important, and its quality directly determines the accuracy of the simulation results; it is therefore necessary to refine the mesh in the cutter-chip contact area. Mesh optimization was performed using the ICEM CFD module in ANSYS Workbench. A tetrahedral mesh is suitable for fast and efficient meshing of complex models through automatic mesh generation, so a tetrahedral mesh was used. The mesh has 931,230 nodes in total, and the minimum edge length is 2.5 × 10^-8 m. The meshing of the cutting tool is shown in Figure 6. The fewer the nodes after optimization, the faster the calculation. The simulated force distribution of the micro-textured ball-end milling cutter turns out to be very close to the force distribution in actual machining.
Setting Boundary Conditions
Boundary conditions and loads must be set on the cutter model before performing a finite element simulation. In the actual machining process, the cutter was fixed to the arbor by a screw, which limited the axial and radial translation of the cutter, and the arbor then rotated with the spindle. Therefore, in the finite element model, the screw hole of the cutter was set as a fixed constraint to restrict the translational movement of the cutter in the axial and radial directions, as shown in Figure 7. During cutting, the force on the cutter is caused mainly by the squeezing between the cutter and the workpiece and the friction between the rake face of the cutter and the chip, and the cutting force is equivalent to a surface load on the cutter-chip contact area of the rake face. However, the distribution of the milling force along the length and width of the cutter-chip contact area is uneven and is a function of time. Therefore, the force density function of the cutter calculated in the previous section was applied as a load to the cutter-chip contact area.
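A useful sanity check on such a load is that integrating the force density over the contact patch recovers the cumulative force it was derived from; the sketch below continues the hypothetical expression from the previous snippet:

```python
# Sanity check on the applied load: integrating the force density
# q(x, y, t) over the contact patch should recover the cumulative force F
# it was derived from. Uses the hypothetical F from the SymPy sketch above.
import numpy as np
from scipy.integrate import dblquad

l_f, l_w, t0 = 0.6, 1.5, 0.0018   # contact length/width and time (assumed)

def q(y, x):   # d^2F/(dx dy) of F = 350 x^1.2 y^1.3 sin(pi t / 0.0036)
    return 350 * 1.2 * 1.3 * x**0.2 * y**0.3 * np.sin(np.pi * t0 / 0.0036)

total, _ = dblquad(q, 0.0, l_f, lambda x: 0.0, lambda x: l_w)
direct = 350 * l_f**1.2 * l_w**1.3 * np.sin(np.pi * t0 / 0.0036)
print(f"integrated: {total:.2f}, direct: {direct:.2f}")   # should agree
```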
Analysis of the Simulation Results
After setting the boundary conditions and loads, the stress field of the micro-textured ball-end milling cutter was simulated by the finite element method. Taking the instant when the micro-textured ball-end milling cutter just cuts into the workpiece as the initial time, the time to cut out of the workpiece was 0.0036 s. The simulation analysis shows that the stress field of the micro-textured ball-end milling cutter reached its maximum value 0.002 s into the cut. The simulation results are shown in Table 7. From the simulation plots of the equivalent stress and equivalent displacement, it can be seen that a stress concentration occurred in the contact area between the cutter and the chip on the rake face of the non-textured cutter during the finishing of titanium alloy. The reason is that, during finishing, the plastic deformation of the workpiece causes squeezing between the cutter and the workpiece in the cutter-chip contact area, thereby changing the metallographic structure of the contact area and leading to stress concentration. During titanium alloy cutting, the force and deformation of the micro-textured cutter are more uniform than those of the non-textured cutter, and the stress concentration is smaller. The maximum deformation zone and the maximum stress value of the micro-textured cutter are smaller than those of the non-textured cutter. The simulation results show that the micro-textures reduce friction and wear on the rake face of the cutter.

Table 7. Simulation results of the stress field.
Non-textured cutter: maximum value 6.14 × 10^9.
As shown in Figure 8, Origin 2017 software (OriginLab, Northampton, MA, USA) was used to plot and analyze the relationship between the equivalent stress and the equivalent displacement of the variable-density micro-textured cutters and the non-textured cutter during titanium alloy cutting. It can be seen from Figure 8 that the equivalent stress and the equivalent displacement of the variable-density micro-textured cutters and the non-textured cutter reach their maximum values 0.002 s into the cut. The instantaneous stress and deformation of the non-textured cutter during titanium alloy cutting are greater than those of the variable-density micro-textured cutters. Owing to the change of the texture density in the cutter-chip close contact area, the "secondary cutting" phenomenon of the micro-textured cutter during titanium alloy cutting is effectively reduced. The simulation results show that the micro-textured cutter can not only reduce friction and wear but also improve the stress distribution of the cutter. By changing the distribution density of the texture on the cutter, the "secondary cutting" phenomenon during micro-textured cutter milling of titanium alloy can be effectively reduced.
Optimization of Variable Density Distribution of Micro-Textured Cutter
Through the simulation of the stress field in milling titanium alloy with a variable density micro-textured ball-end milling cutter, it is concluded that the instantaneous stress field and the maximum stress value of the cutter are directly affected by the different texture distribution densities on the rake face of the cutter. Therefore, it is necessary to establish the relationship between the different distribution densities of the micro-textures and the instantaneous stress field of the cutter, so as to optimize the texture density in the cutter-chip compact contact area and to obtain the best combination of texture distribution densities, which provides a new concept for the design of micro-textured cutters.
In this paper, a genetic algorithm was used to optimize the variable density distribution of the micro-textured cutter. The instantaneous stress field of the cutter was taken as the optimization objective, and the texture densities X1 in the first area, X2 in the second area, and X3 in the third area of the cutter-chip compact contact area were taken as the optimization variables. When a genetic algorithm is adopted for optimization, the objective function must be established first. In this paper, the instantaneous stress field of the variable density micro-textured cutter was taken as the optimization objective, so a prediction model of the instantaneous stress field of the cutter was established as the objective function of the optimization model. The instantaneous stress of the variable density micro-textured cutter was modeled with respect to the variables X1, X2, and X3 as

$\sigma = C \, X_1^{\alpha_1} X_2^{\alpha_2} X_3^{\alpha_3}$, (25)

where C denotes the correlation coefficient of the prediction model and $\alpha_1$, $\alpha_2$, and $\alpha_3$ denote the undetermined indices of the related independent variables. Taking the logarithm of both sides of Equation (25) gives

$\lg \sigma = \lg C + \alpha_1 \lg X_1 + \alpha_2 \lg X_2 + \alpha_3 \lg X_3$. (26)

With $y = \lg \sigma$, $\alpha_0 = \lg C$, $x_1 = \lg X_1$, $x_2 = \lg X_2$, and $x_3 = \lg X_3$, Equation (26) is transformed into the linear equation

$y = \alpha_0 + \alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3$. (27)

According to Equation (27) and the stress field simulation data of the variable density micro-textured ball-end milling cutter, a multiple linear regression equation was established by the least-squares method:

$y_i = \alpha_0 + \alpha_1 x_{1i} + \alpha_2 x_{2i} + \alpha_3 x_{3i} + \varepsilon_i$, (28)

where $\varepsilon_i$ denotes a random error. The stress field simulation data of the variable density micro-textured ball-end milling cutter were substituted into Equation (28); MATLAB R2017b was then used to regress the simulation data through multiple linear regression, and the prediction model of the instantaneous stress field of the variable density micro-textured ball-end milling cutter was obtained as Equation (29). During the process of finishing titanium alloy with the micro-textured ball-end milling cutter, the instantaneous stress field of the micro-textured cutter is affected by the texture density distribution in the first, second, and third regions where the cutter and chip are in close contact under the same cutting parameters. Therefore, in order to optimize the variable density distribution of the micro-textured cutters based on the stress field, the constraint is that the texture density in each of the three regions where the cutter and chip are in close contact satisfies 0.01 < Xi < 0.1 (i = 1, 2, 3).
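The regression step can be reproduced in outline. The following is a minimal sketch, not the authors' MATLAB code, of fitting the log-linear model of Equations (27)-(28) by least squares; the texture densities and stress values below are hypothetical placeholders, since the paper's simulation data are not reproduced here.

```python
# A minimal sketch (assumed, not the authors' MATLAB code) of fitting the
# log-linear stress model of Equations (27)-(28) by least squares.
import numpy as np

# Hypothetical texture densities (X1, X2, X3) and simulated peak stresses (Pa);
# the paper's actual simulation values are not reproduced here.
X = np.array([[0.03, 0.03, 0.03],
              [0.03, 0.07, 0.05],
              [0.05, 0.05, 0.09],
              [0.07, 0.09, 0.03],
              [0.09, 0.07, 0.05]])
sigma = np.array([6.1e9, 6.0e9, 5.9e9, 5.8e9, 5.7e9])

# Linearize: y = a0 + a1*x1 + a2*x2 + a3*x3, with y = lg(sigma), xk = lg(Xk)
A = np.column_stack([np.ones(len(sigma)), np.log10(X)])
(a0, a1, a2, a3), *_ = np.linalg.lstsq(A, np.log10(sigma), rcond=None)
C = 10.0 ** a0  # correlation coefficient of Equation (25)

def predict_stress(x1, x2, x3):
    """Prediction model in the form of Equation (25)/(29)."""
    return C * x1**a1 * x2**a2 * x3**a3
```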
Taking the instantaneous stress field of the micro-textured cutter as the evaluation standard and the above constraints as the boundary conditions, the genetic algorithm was used to optimize the variable density distribution of the micro-textured cutters. In order to ensure the accuracy of the optimization results, optimization parameters must be set in the genetic algorithm toolbox when the variable density distribution of the micro-textured cutters is optimized. The population size set in this paper is 300, the crossover probability is 0.95, and the mutation probability is 0.01. Finally, the genetic algorithm toolbox was used to solve the optimization model. The optimization results of the genetic algorithm are shown in Figure 9. The optimal solution of the variable density distribution of the micro-textured cutter in the cutter-chip compact contact area was obtained through the optimization. The texture distribution densities X1 in the first region, X2 in the second region, and X3 in the third region are 0.0905, 0.0712, and 0.0493, respectively.
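For illustration only, a population-based optimization analogous to the described setup can be run with SciPy's differential evolution, a related evolutionary optimizer rather than the MATLAB genetic algorithm toolbox the authors used, reusing `predict_stress` from the sketch above; with placeholder data the optimum will not reproduce the reported values.

```python
# Illustrative stand-in for the MATLAB genetic algorithm toolbox used in the
# paper (population 300, crossover 0.95, mutation 0.01).
from scipy.optimize import differential_evolution

bounds = [(0.01, 0.1)] * 3   # constraint 0.01 < X_i < 0.1, i = 1, 2, 3

result = differential_evolution(
    lambda x: predict_stress(*x),   # minimize the predicted instantaneous stress
    bounds,
    popsize=100,          # 100 x 3 parameters = 300 individuals, as in the paper
    recombination=0.95,   # analogous to the crossover probability
    seed=0,
)
X1_opt, X2_opt, X3_opt = result.x   # the paper reports 0.0905, 0.0712, 0.0493
```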
Conclusions
Aiming at the problem of "secondary cutting" during the process of finishing titanium alloy with a micro-textured ball-end milling cutter, the mechanism of friction reduction and wear resistance of micro-textured cutters was studied in detail in this article. By changing the distribution density of the micro-texture on the cutter, the dynamic characteristics of the instantaneous stress field during the process of milling titanium alloy with the micro-textured cutter were studied, and the following conclusions were drawn: (1) Through titanium alloy milling experiments, the milling force models and the cutter-chip contact area mathematical models of cutters with different micro-texture densities and without texture were established. By solving the milling force model and the experimental formula of the cutter-chip contact area, the force density functions of the cutters with different micro-texture densities and without texture were obtained. This provides a theoretical basis for studying the stress field of the variable density micro-textured cutters.
(2) The instantaneous stress fields of cutters with different texture densities and of non-textured cutters during the process of milling titanium alloy were simulated. The simulation results show that, during the process of milling titanium alloy, stress concentration occurs in the cutter-chip contact area of the rake face of the non-textured cutters. The force and deformation of the micro-textured cutters are more uniform than those of the non-textured cutters, with less stress concentration, and the maximum deformation area and maximum stress value of the micro-textured cutters are smaller than those of the non-textured cutters.
(3) Taking the instantaneous stress field as the objective function, the genetic algorithm was used to optimize the variable density distribution of the micro-textured cutters, and the optimal solution of the variable density distribution of the micro-textured cutters in the cutter-chip compact contact area was obtained. The texture distribution density X1 in the first region, X2 in the second region, and X3 in the third region are 0.0905, 0.0712 and 0.0493, respectively.
Conflicts of Interest:
The authors declare no conflict of interest.
Journal of Kufa for Chemical Sciences
Abstract
The present work describes the synthesis of some new Schiff base derivatives of hydrazine hydrate coupled with ethyl 5-phenyl-2-(1,3,4-oxadiazole-thiol) acetate; these were then reacted with mercaptoacetic acid to synthesize five-membered heterocyclic ring derivatives. The yields of all synthesized compounds were good. All compounds were confirmed by their melting points and FT-IR spectra.
Introduction
Oxadiazoles are a monocyclic ring system with multiple applications. Although the 1,3,4-oxadiazole ring system was known as early as 1880, the proper study of its chemistry, structure, physical properties, and the applications of its various derivatives began only in 1950 [1]. N-containing heterocycles, especially five-membered rings, are of great interest as they are found in natural products [2] and are used frequently in medicinal chemistry. There are three other known isomers: 1,2,4-oxadiazole, 1,2,3-oxadiazole, and 1,2,5-oxadiazole. Amongst the oxadiazole isomers, 1,3,4-oxadiazole derivatives are known to be the most stable [3].
General
The chemicals used were obtained from Sigma-Aldrich. Melting points were recorded on a Gallen-Kamp MFB-600 melting point apparatus. The FT-IR spectra were recorded on a Shimadzu FT-IR-8400S spectrophotometer. 1H NMR spectra were recorded on a Bruker 400 MHz spectrometer (Germany); deuterated DMSO-d6 was used for sample preparation, and tetramethylsilane (TMS) was used as an internal standard.
Synthesis of 5-phenyl-2-mercapto (1,3,4-oxadiazole) S2
A mixture of phenyl hydrazide (1 g, 0.007 mol) and an excess of carbon disulfide (2.5 mL) in absolute ethanol (25 mL) under alkaline conditions (potassium hydroxide, 0.4 g) was refluxed for 8 h. After checking that the reaction was complete (evolution of H2S gas, monitored with lead acetate, had stopped), 10% HCl was added dropwise to neutralize the base; the precipitate was filtered off and washed with water (to remove the potassium chloride salt) to afford the desired compounds [16].
After that, the mixture was cooled down and the precipitated product was filtered, washed with cold water, dried, and recrystallized from ethanol [19].
Synthesis of 4-thiazolidinone derivatives S14-S22
A mixture of Schiff bases S5-S13 (0.001 mol) and an excess of thioglycolic acid (0.002 mol) in ethanol was refluxed for 18-20 h. The solvent was evaporated and the residue was neutralized with 5% sodium bicarbonate solution to remove the excess thioglycolic acid. The precipitate formed was filtered, washed several times with distilled water, and recrystallized from ethanol [20].
Results and discussion
This work includes the synthesis of new heterocyclic ring derivatives.
"Chemistry"
] |
Through-Plane Super-Resolution With Autoencoders in Diffusion Magnetic Resonance Imaging of the Developing Human Brain
Fetal brain diffusion magnetic resonance images (MRI) are often acquired with a lower through-plane than in-plane resolution. This anisotropy is often overcome by classical upsampling methods such as linear or cubic interpolation. In this work, we employ an unsupervised learning algorithm using an autoencoder neural network for single-image through-plane super-resolution by leveraging a large amount of data. Our framework, which can also be used for slice outlier replacement, outperformed conventional interpolations quantitatively and qualitatively on pre-term newborns of the developing Human Connectome Project. The evaluation was performed on both the original diffusion-weighted signal and the estimated diffusion tensor maps. A byproduct of our autoencoder was its ability to act as a denoiser. The network was able to generalize to fetal data with different levels of motion, and we qualitatively showed its consistency, hence supporting the relevance of pre-term datasets for improving the processing of fetal brain images.
INTRODUCTION
The formation and maturation of white matter are at their highest rate during the fetal stage of human brain development. In utero brain imaging techniques offer a unique opportunity to gain more insight into this critical period. Diffusion-weighted magnetic resonance imaging (DW-MRI) is a well-established tool to reconstruct, in vivo and non-invasively, the white matter tracts in the brain (1,2). Fetal DW-MRI, in particular, could characterize early developmental trajectories in brain connections and microstructure (3-6). Hence, fetal DW-MRI has been of significant interest in the past years, where studies (7-9) have provided analyses using diffusion tensor imaging (DTI) by computing diffusion scalar maps such as fractional anisotropy (FA) or mean diffusivity (MD), using a limited number of gradient directions. A recent study focused on reconstructing fiber orientation distribution functions (fODF) (10) using higher-quality datasets and rich information, including several gradient directions (32 and 80), higher b-values (750 and 1,000 s/mm²), and a higher signal-to-noise ratio (SNR) (3 Tesla magnetic field strength). Additionally, those datasets were acquired in a controlled and uniform research setting with healthy volunteers, which can hardly be reproduced in the clinical environment.
Despite these promising results, acquiring high-quality data remains the main obstacle in the field of fetal brain imaging. First, unpredictable and uncontrollable fetal motion is a major challenge. To overcome this problem, fast echo-planar imaging (EPI) sequences are typically used to freeze intra-slice motion. However, intra- and inter-volume motion still have to be addressed in the post-processing steps using sophisticated slice-to-volume registration (SVR) (11-13). Moreover, EPI sequences generate severe non-linear distortions that need adapted distortion correction algorithms (14). Additionally, the resulting images display low SNR due to at least three factors: the inherently small size of the fetal brain, the surrounding maternal structures and amniotic fluid, and the increased distance to the coils. In order to compensate for the low SNR in EPI sequences, series with thick voxels (i.e., low through-plane resolution) are often acquired. Finally, to shorten the acquisition time, small b-values (b = 400-700 s/mm²) and a low number of gradient directions (10-15) (8,9) are commonly used in fetal imaging, which in turn results in a low angular resolution.
Clinical protocols typically acquire several anisotropic orthogonal series of 2D thick slices to cope with high motion and low SNR. Super-resolution reconstruction techniques, originally developed for structural T2-weighted images (15-20) by combining different 3D low-resolution volumes, have then also been successfully applied in 4D fetal functional (21) and diffusion MRI contexts (10,12). Still, despite these two pioneering works, super-resolution DW-MRI from multiple volumes has barely been explored in vivo. In fact, the limited scanning time to minimize maternal discomfort hampers the acquisition of several orthogonal series, resulting in a trade-off between the number of gradient directions and the number of orthogonal series. Thus, DW-MRI fetal brain protocols are not standardized from one center to another (Supplementary Table S1), and more experiments have to be conducted in this area to design optimal sequences (22,23). Sequence-based super-resolution methods that were applied in adult brains (24-27) could also be explored and adapted to fetal brains, such as in Ning et al. (24), which acquires same-orientation, shifted low-resolution images in the slice-encoding direction with a non-overlapping gradient scheme to reconstruct one high-resolution volume using compressed sensing. The term super-resolution is used by both the image processing and the MR sequence development communities, though in slightly different ways. While the former works mainly in image space and the latter in k-space, both aim at increasing the image resolution at different stages, using either multiple volumes or single volumes.
In fact, fetal DW-MRI resolution enhancement could also benefit from single image super-resolution approaches, i.e., either within each DW-MRI 3D volume separately or using the whole 4D volume including all diffusion measurements. It has indeed been demonstrated that a linear or cubic interpolation of the raw signal enhances the resulting scalar maps and tractography (28). In practice, this is typically performed either at the signal level or at DTI scalar maps (29). We believe that single volume and multiple volumes super-resolution can also be performed together, i.e., where the output of the former is given as the input of the latter. This aggregation could potentially lead to a better motion correction and hence to a more accurate final high-resolution volume.
Several studies have proposed single-image super-resolution enhancement methods for DW-MRI but, to the best of our knowledge, none of them has been applied to anisotropic datasets or to the developing brain. In Coupé et al. (30), the authors utilized a non-local patch-based approach in an inverse problem paradigm to improve the resolution of adult brain DW-MRI volumes using a non-diffusion-weighted image (b = 0 s/mm²) prior. Although this approach yielded competitive results, it was built upon a sophisticated pipeline, which kept it from being extensively used. The first machine learning study (31) used shallow learning algorithms to learn the mapping between the diffusion tensor maps of a downsampled high-resolution image and the maps of the original image. Recently, deep learning models, which can implicitly learn relevant features from training data, were used to perform single-image super-resolution with a convolutional neural network (32,33) and a customized U-Net (34,35). Both approaches produced promising results in a supervised learning scheme. Supervision, however, needs large high-quality datasets, which are scarce for the perinatal brain for the reasons enumerated above.
The specific challenge of fetal DW-MRI is the 3-5 mm acquired slice thickness, with only a few repetitions available. Hence, our main objective is to focus on through-plane DW-MRI resolution enhancement. This would be valuable not only for native anisotropic volumes but also for outlier slice recovery. In fact, motion-corrupted slices in DW-MRI are either discarded, which results in a loss of information, or replaced using interpolation (36-38). We approached this problem from an image synthesis point of view using unsupervised learning networks such as autoencoders (AEs), as demonstrated in cardiac T2-weighted MRI (39) and recent works in DW-MRI (40). Here, we present a framework with autoencoders, which are neural networks that learn in an unsupervised way to encode efficient data representations and can behave as generative models if this representation is structured enough. By accurately encoding DW-MRI slices in a low-dimensional latent space, we were able to successfully generate new slices that accurately correspond to in-between "missing" slices. In contrast to the above-referenced supervised learning approaches, this method is scale-agnostic, i.e., the enhancement scale factor can be set a posteriori to the network training.
Realistically enhancing the through-plane resolution would potentially help the clinicians to better assess whether the anterior and posterior commissures are present in cases with complete agenesis of the corpus callosum (6). It can reduce partial volume effects and thus contribute to the depiction of more accurate white matter properties in the developing brain.
In this work, we present the first unsupervised through-plane resolution enhancement for perinatal brain DW-MRI. We leverage the high-quality dataset of the developing Human Connectome Project (dHCP), on which we train and quantitatively validate using pre-term newborns that are anatomically close to fetal subjects. We finally demonstrate the performance of our approach on fetal brains.
Pre-term dHCP Data
We selected all 31 pre-term newborns of 37 gestational weeks (GW) or less at the time of scan (range: [29.3, 37.0], mean: 35.5, median: 35.7) from the dHCP dataset (41) (subject IDs in Supplementary Table S2). Acquisitions were performed using a 3T Philips Achieva scanner (32-channel neonatal head coil and 70 mT/m gradients) with a monopolar spin-echo EPI Stejskal-Tanner sequence (Δ = 42.5 ms, δ = 14 ms, TR = 3,800 ms, TE = 90 ms, echo spacing = 0.81 ms, EPI factor = 83) and a multiband factor of 4, resulting in an acquisition time of 19:20 min. In a field of view of 150 x 150 x 102 mm³, 64 interleaved slices were acquired with an in-plane resolution of 1.5 mm, a slice thickness of 3 mm, and a slice overlap of 1.5 mm. An isotropic resolution of 1.5 mm was obtained after super-resolution. The dataset was acquired with a multi-shell sequence using four b-values (b ∈ {0, 400, 1000, 2600} s/mm²) with 300 volumes, but we extracted only the 88 volumes corresponding to b = 1000 s/mm² (b1000) as a compromise between high contrast-to-noise ratio (CNR), i.e., b1000 has a higher CNR than b400 and b2600 (42), and proximity to the b = 700 s/mm² typically used in clinical settings for fetal DW-MRI. The main attributes of the pre-term data are summarized in Table 1. Brain masks and region/tissue labels segmented using a pipeline based on the Draw-EM algorithm (43,44) were available in the corresponding anatomical dataset. All the images were already corrected (42) for inter-slice motion and distortion (susceptibility, eddy currents, and motion). After pre-processing, the final image resolution and FOV were, respectively, 1.17 x 1.17 x 1.5 mm³ and 128 x 128 x 64 mm³.
Fetal Data
Fetal acquisitions were performed at 1.5T (MR450, GE Healthcare, Milwaukee, WI, USA) in the University Children's Hospital Zürich (KISPI) using a single-shot EPI sequence (TE = 63 ms, TR = 2200 ms) and 15 gradient directions at b = 700 s/mm² (b700). The acquisition time was approximately 1.3 min per 4D volume. The in-plane resolution was 1 x 1 mm², the slice thickness 4-5 mm, and the field of view 256 x 256 x 14-22 voxels. Three axial series and a coronal one were acquired for each subject. Brain masks were manually generated for the b0 (b = 0 s/mm²) of each acquisition and automatically propagated to the diffusion-weighted volumes. Between 8 and 18 T2-weighted images were also acquired for each subject, and the corresponding brain masks were automatically generated using an in-house deep learning based method using transfer learning from Salehi et al. (45). Manual refinements were needed for a few cases at the brain boundaries.
Fetal Data Processing
We selected three subjects with high-quality imaging and without motion artifacts (24, 29, and 35 GW) and three subjects with a varying degree of motion (23, 24, and 27 GW). Supplementary Figure S1 shows the distribution of gestational age of both the 31 pre-term newborns and the 6 fetal subjects used in this study. A DW-MRI volume of a motion-free case (sub-2, 29 GW) and of a pre-term newborn of equivalent age are illustrated in Figure 1. By performing quality control, we discarded highly corrupted volumes: volumes with severe motion-induced signal drops in two moving subjects and very low SNR volumes in one motion-free subject. Table 2 presents the different characteristics of each subject as well as the corresponding discarded volumes. The coronal volume was not used to avoid any interpolation confounding factor while co-registering different orientations. All the subjects were pre-processed for noise, bias field inhomogeneities, and distortions using the Nipype framework (46). The denoising was performed using a principal component analysis based method (47), followed by an N4 bias-field inhomogeneity correction (48). Distortion was corrected using an in-house implementation of a state-of-the-art algorithm for the fetal brain (14), consisting of rigid registration (49) of a structural T2-weighted image to the b0 image, followed by a non-linear registration (49) in the phase-encoding direction of the b0 to the same T2-weighted image. The transformation was then applied to the diffusion-weighted volumes. A block-matching algorithm for symmetric global registration was also performed for the two subjects with motion (sub-4, sub-6) [NiftyReg, (50)]. The b0 image of the first axial series was selected as a reference to which we subsequently registered the remaining volumes, i.e., the non-b0 images from the first axial series and all volumes from the two others. Gradient directions were rotated accordingly. Supplementary Figure S2 shows an example of a DWI volume (from sub-4) in its original, pre-processed, and motion-corrected versions.
Architecture
Our network architecture, similarly to Sander et al. (39), is composed of four blocks in the encoder and four in the decoder (Figure 2). Each block in the encoder consists of two layers made of 3 x 3 convolutions followed by a batch normalization (51) and an Exponential Linear Unit non-linearity. The number of feature maps is doubled from 32 after each layer and the resulting feature maps are average-pooled. We further added two layers of two 3 x 3 convolutions in which the feature maps of the last layer were used as the latent-space of the autoencoder. The decoder uses the same architecture as the encoder but by conversely halving the number of feature maps and upsampling after each block using nearest-neighbor interpolation. At the final layer a 1 x 1 convolution using the sigmoid function is applied to output the predicted image. The number of network parameters is 6,098,689.
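As an illustration, a minimal Keras sketch of the described encoder-decoder is given below. It follows the block structure in the text (two 3 x 3 convolutions with batch normalization and ELU per block, average pooling, nearest-neighbor upsampling, a 32-map latent space, and a final 1 x 1 sigmoid convolution), but the exact layer bookkeeping needed to reproduce the reported 6,098,689 parameters is assumed rather than taken from the authors' code.

```python
# A simplified sketch of the described architecture (assumptions noted above).
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    for _ in range(2):  # two 3x3 conv layers per block
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("elu")(x)
    return x

# Encoder: four blocks, feature maps doubling from 32, average pooling
enc_in = layers.Input(shape=(128, 128, 1))
x = enc_in
for filters in (32, 64, 128, 256):
    x = conv_block(x, filters)
    x = layers.AveragePooling2D()(x)
latent = conv_block(x, 32)  # latent space with 32 feature maps (8x8x32 here)
encoder = tf.keras.Model(enc_in, latent, name="encoder")

# Decoder: mirror of the encoder with nearest-neighbor upsampling
dec_in = layers.Input(shape=encoder.output_shape[1:])
y = dec_in
for filters in (256, 128, 64, 32):
    y = layers.UpSampling2D(interpolation="nearest")(y)
    y = conv_block(y, filters)
dec_out = layers.Conv2D(1, 1, activation="sigmoid")(y)  # final 1x1 conv
decoder = tf.keras.Model(dec_in, dec_out, name="decoder")

autoencoder = tf.keras.Model(enc_in, decoder(encoder(enc_in)))
```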
Training and Optimization
We trained our network solely on b0 images (15 per subject), using an 8-fold nested cross-validation in which we trained and validated on 27 subjects and tested on four. The proportion of validation data was set to 15% of the training set. The training/validation set contains 25,920 slices with a 128 x 128 field of view, totaling 424,673,280 voxels. Our network was trained in an unsupervised manner by feeding it normalized 2D axial slices that are encoded as feature maps in the latent space. The number of feature maps, and hence the dimensionality of the latent space, was optimized (optimal value: 32) using Keras-Tuner (52). The batch size and the learning rate were additionally optimized and set to 32 and 5e-5, respectively. The network, initialized using (53), was trained for 200 epochs to minimize the mean squared error loss between the predicted and the ground truth image. We utilized for this aim the Adam optimizer (54) with the parameters β1 = 0.5 and β2 = 0.999, and the network corresponding to the epoch with the minimal validation loss was then selected. The implementation was performed in the framework of TensorFlow 2.4.1 (55), and an Nvidia GeForce RTX 2080 GPU was deployed for training. Network code and checkpoint examples can be found in our Github repository 1.
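A hedged sketch of the stated training configuration follows, reusing the `autoencoder` model from the architecture sketch above; `train_slices` and `val_slices` are hypothetical array names for the normalized 2D axial slices.

```python
# Training setup as stated in the text: Adam (lr = 5e-5, beta_1 = 0.5,
# beta_2 = 0.999), MSE loss, batch size 32, 200 epochs.
autoencoder.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5, beta_1=0.5, beta_2=0.999),
    loss="mse",
)
# Hypothetical data arrays; inputs and targets coincide (unsupervised training).
# autoencoder.fit(train_slices, train_slices,
#                 validation_data=(val_slices, val_slices),
#                 batch_size=32, epochs=200)
```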
Inference
The network trained on b0 images was used for the inference of b0 and b1000 volumes. Two slices were encoded in the latent space, and their N "in-between" slice(s) (N = 1, 2 in our experiments) were predicted using weighted averages of the latent codes of the two slices. The weights for N = 1 and N = 2 were set proportionally to the distance to the neighboring original slices [as performed in Sander et al. (39)], i.e., equal weighting for N = 1, and {1/3, 2/3}, {2/3, 1/3} for N = 2. Performing a grid search on ten weights (0.1-0.9 with a step of 0.1) confirmed the optimality of the previous choice. An example on pre-term b1000 data for a weight of 0.5 is shown in Figure 2 (Testing). Similarly, the same b0 network was also used to enhance the through-plane resolution of fetal b0 and b700 volumes. Finally, since the network outputs were normalized between 0 and 1, histogram normalization to the weighted average of the input images was performed.
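The inference step can be sketched as follows, reusing the `encoder` and `decoder` models from the architecture sketch above; the weighting scheme is the one described in the text.

```python
# Encode two neighboring slices, blend their latent codes, decode the result.
import numpy as np

def synthesize_between(encoder, decoder, slice_a, slice_b, w=0.5):
    """Decode a weighted average of the latent codes of two 2D slices.
    w is the weight of slice_a; use w = 2/3 and w = 1/3 for N = 2."""
    za = encoder.predict(slice_a[None, ..., None])   # add batch/channel dims
    zb = encoder.predict(slice_b[None, ..., None])
    z = w * za + (1.0 - w) * zb                      # latent-space interpolation
    return decoder.predict(z)[0, ..., 0]             # back to a 2D slice
```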
Pre-term Newborns
Our network was separately tested on b0 images and the 88 volumes of b1000 using an 8-fold cross-validation where 7 folds contain four subjects and 1 fold contains three subjects. We removed N intermediate slices (N = 1, 2) from the testing set volumes in alternating order and used the (weighted) average of the latent space feature maps of the newly adjacent slices to encode the N missing slice(s) using the autoencoder (Figure 2, Testing). The resulting latent representation was then decoded to predict the N slices in the voxel space, which were compared to the previously removed N slices, i.e., the ground truth (GT). The same N slices were also generated using three baseline approaches for comparison: trilinear, tricubic, and 5th-order B-spline interpolations [using Tournier et al. (56) and Avants et al. (49)]. We denote them, respectively, for removing one or two slices: Linear-1, Cubic-1, Spline-1 and Linear-2, Cubic-2, Spline-2.
Latent space exploratory analysis -In order to have an intuitive idea of the latent space representation, we have compared the latent space representations between different gradient directions of all possible pairs from the 88 volumes of the b1000 4D volume. As two volumes with closely aligned gradient directions are more similar than two volumes with orthogonal directions, we aimed to check whether this property is globally preserved in the latent encoding of our input images.
Robustness to noise -We have added different low levels of Rician noise (57) to the original signal as follows: for each pixel with a current intensity S clean , the new intensity S noisy = (S clean + GN 1 ) 2 + GN 2 2 , where GN 1 and GN 2 are random numbers sampled from a Gaussian distribution with zero mean and a SD of S clean (b = 0)/SNR out and SNR out is the desired SNR we aim to simulate. Three SNRs of {27, 25, 23} and {20, 16, 13} were simulated for b0 and b1000, respectively. We have used higher noise levels for b1000 to better simulate the inherently lower SNR in this configuration.
Scalar maps -By merging the b0 and b1000 using the autoencoder enhancement, we reconstructed FA, MD, axial diffusivity (AD), and radial diffusivity (RD) from DTI using Dipy (58) separately for AE-1 or AE-2, i.e., where we, respectively, remove one (N = 1) or two slices (N = 2). We further subdivided the computation in specific brain regions (cortical gray matter, white matter, corpus callosum, and brainstem as provided by the dHCP). Region labels were upsampled and manually refined to match the super-resoluted/interpolated volumes. We performed similar computation of the diffusion maps generated using the trilinear, tricubic, and B-spline interpolated signals.
Fetal
For each subject and each 3D volume (b0 or DW-MRI), we generated one or two middle slices using the autoencoder, hence synthetically enhancing the resolution from 1 x 1 x 4-5 mm 3 to a simulated resolution of 1 x 1 x 2-2.5 mm 3 and 1 x 1 x 1.33-1.67 mm 3 , respectively. We then generated whole-brain DTI maps (FA, MD, AD, and RD) and showed the colored FA. Splenium and genu structures of the corpus callosum were additionally segmented on FA maps for subjects in which these structures were visible. The mean FA and MD were reported for these regions for original and autoencoder enhanced volumes.
Quantitative Evaluation
Raw diffusion signal -We computed the voxel-wise error between the raw signal synthesized by the autoencoder and the GT using the mean squared error (MSE) and the peak SNR (PSNR). We compared the autoencoder performance with the three baseline approaches: trilinear, tricubic, and B-spline of 5 th order interpolations.
Latent space exploratory analysis -We have computed the average squared Euclidian voxel-wise distance between slices of all 3D b1000 volume pairs. This was performed both at the input space and at the latent space representation. The images were flattened from 2D to one-dimensional vectors and compared as follows: Where u and v are the vectors to be compared for all the n corresponding pixels. The final distance between each two 3D volumes is the average distance of all 2D distance computed in 1.
Robustness to noise -We computed with respect to the GT signal, the error of the signal with noise, and the output of the autoencoder using the signal with noise as input. We compared the results using MSE separately for b0 and b1000.
Scalar maps -We computed the voxel-wise error between the diffusion tensor maps reconstructed with the GT and the one by merging the b0 and b1000 using the autoencoder enhancement. We computed the error separately using either AE-1 or AE-2. We used the MSE and the PSNR as metrics and the same diffusion maps generated using the trilinear, tricubic, and Bspline interpolated signal as a baseline. Moreover, we qualitatively compare colored FA generated using the best baseline method, autoencoder, and the GT.
Pre-term Newborns
First, we inspected the latent space and how the 88 DW-MRI volumes are encoded with respect to each other. We can notice in Figure 3 (right panel) that as the angle between two b-vectors approaches orthogonality (90°), the difference between the latent representations of their corresponding volumes increases. On the contrary, the difference decreases the more the angle tends toward 0° or 180°. Although the pattern is more pronounced in the input space (Figure 3, left panel), this trend fulfills a necessary condition for the generation of coherent representations of the input data by our network.
Moreover, our network, which was exclusively trained on b0 images, was able to generalize to b1000. In fact, the signal similarity between b0 and DW images was also used in Coupé et al. (30) in an inverse problem paradigm in which a b0 prior was incorporated to reconstruct b700 volumes. Figure 4 illustrates qualitative results and absolute errors for N = 1 with respect to the GT (right) between the best interpolation baseline (trilinear, left) and the autoencoder enhancement (middle) for b1000. Overall, we saw from these representative examples higher absolute error intensities in the Linear-1 configuration than in AE-1. However, ventricles are less visible when using the autoencoder. We hypothesize this is because of their higher intensity in the b0 images on which the network was trained.
The average MSE with respect to the original DW-MRI signal within the whole brain is shown in Figure 5 for both the autoencoder-enhanced volumes and the baseline methods (trilinear, tricubic, and B-spline), for the configurations where one (Method-1) or two (Method-2) slices were removed.

FIGURE 5 | Mean squared error (MSE) between the three baseline methods (linear, cubic, and B-spline 5th order) and autoencoder (AE) enhancement, both for b0 (left) and b1000 (right). Two configurations were assessed: either N = 1, i.e., removing one slice and interpolating/synthesizing it (Linear-1, Cubic-1, Spline-1, AE-1), or N = 2, i.e., the same approach with two slices (Linear-2, Cubic-2, Spline-2, AE-2). The autoencoder has a significantly lower MSE when compared to each respective best baseline method (paired Wilcoxon signed-rank test p < 1.24e-09).

The first observation was the expected higher error for the configuration where two slices are removed (N = 2), independently of the method used. Additionally, the autoencoder enhancement clearly outperformed the baseline methods in all configurations (paired Wilcoxon signed-rank test p < 1.24e-09). In particular, the more slices we remove, the higher the gap between the baseline interpolation methods and the autoencoder enhancement. For b0, the MSE gain was around 0.0005 for N = 1 and 0.0015 for N = 2 between the autoencoder and the average baseline method (Spline-1 vs. AE-1 and Linear-2 vs. AE-2). For b1000, the gain between AE-1 and Cubic-1 was 0.0007, and 0.0015 between AE-2 and Cubic-2.
The overperformance of the autoencoder is also shown overall in the DTI maps, where MD, AD, and RD were better approximated when compared to the best baseline method (linear interpolation), particularly in the configuration where two slices were removed (Figure 6). However, the FA showed the opposite trend, especially in the configuration where one slice was removed (AE-1 vs. Linear-1). Nevertheless, FA for white-matter-like structures ("WM", corpus callosum, and brainstem) showed higher performance with the autoencoder, as depicted for each structure in Figure 7.

FIGURE 7 | (caption fragment) ... N = 2). Comparing the DTI maps of the merged brain region labels, we found that AE-2 significantly outperforms the other conventional methods for MD, RD, and AD (paired Wilcoxon signed-rank test: **p < 0.0018 and *p < 0.017).

In fact, by plotting colored FA for these two configurations, we observed that the autoencoder generates tracts that are consistent with the GT. For instance, autoencoder enhancement showed higher-frequency details around the superficial WM area (Figure 8, top row) and removed artifacts between the internal capsules better than the linear method (Figure 8, bottom row). However, in some cases, the baseline method better depicted tracts, such as in the corpus callosum (Figure 8, middle row).
ODFs generated using spherical harmonics of order 8 are also depicted in Supplementary Figure S3, where the autoencoder-enhanced data show little qualitative difference from the GT ODFs. Figure 9 shows similar comparisons for MD in different brain regions between the baseline method (Linear), the autoencoder, and the GT. Overall, quantitatively, for structures in the case where two slices were removed, the autoencoder enhancement outperformed the best baseline method in 15 out of 16 configurations (Figure 7). However, this is not always the case when one slice is removed, such as in the AD of the brainstem.

FIGURE 10 | Mean squared error between noisy images and the GT vs. encoded-decoded noisy images and the GT. SNR_out is the desired SNR of the output in the Rician noise formula (Subsection 2.3.1). We notice the robustness of the autoencoder to growing levels of noise both for b0 images (left) and b1000 images (right).

Figure 10 shows how our autoencoder was robust to reasonable amounts of noise. In fact, simply encoding and decoding the noisy input generates a slice that is closer to the GT than the noisy slice, as depicted for different levels of noise for both b0 and b1000.

FIGURE 11 | (caption fragment) ... Table 2). Note the severe signal drop in the seventh direction because of motion.

Figure 11 illustrates inter-volume motion between five diffusion-weighted volumes, where we also notice a severe signal drop in the seventh direction (sub-4, 23 GW).
Fetuses
The autoencoder trained on pre-term b0 images was able to coherently enhance fetal acquisitions, both b0 and DW-MRI volumes at b700. The network was able to learn low-level features that generalize over anatomy, contrast, and b-values. Corresponding FA and colored FA for a still subject (sub-1, 35 GW) are illustrated in Figure 12 (top), where we clearly see the coherence of the two synthesized images as we go from one original slice to the next. In fact, both the corpus callosum and the internal/external capsules follow a smooth transition between the two slices. Similarly, Figure 12 (bottom) exhibits MD and FA for a moving subject (sub-4, 23 GW), where we also notice, particularly for the MD, the smooth transition between the originally adjacent slices. FA and MD for the remaining subjects are shown in Supplementary Figure S4. Tractography on a fetal subject (sub-1, 35 GW) using both the original and the autoencoder-enhanced (AE-1) DW-MRI is shown in Supplementary Figure S5.
The splenium and genu of the corpus callosum were only sufficiently visible in the three late GW subjects (sub-1, sub-2, and sub-6). Figure 13 shows quantitative results for FA and MD in the two structures. Both maps fall into the range of reported values in the literature (59) for the respective gestational age, for original and autoencoder enhanced volumes.
DISCUSSION
In this work, we have shown that (1) autoencoders can be used for through-plane super-resolution in diffusion MRI, (2) training on b0 images can generalize to gradient diffusion volumes of different contrasts, and (3) as a proof of concept, training on pre-term anatomy can generalize to fetal images.
In fact, we have demonstrated how autoencoders can realistically enhance the resolution of DW-MRI perinatal brain images. We have compared it to conventionally used methods such as trilinear, tricubic, and B-spline interpolations both qualitatively and quantitatively for pre-term newborns of the dHCP database. Resolution enhancement was performed at the diffusion signal level and the downstream benefits propagated to the DTI maps.
Additionally, our network, which was solely trained on non-diffusion-weighted images (b0), was able to generalize to a b1000 contrast. In fact, the most intuitive approach is to infer b1000 images using a network trained on b1000. We indeed tried this, but the network did not converge for the majority of the folds. This might be due to the high variability of b1000 images across directions and their inherently low SNR. In the one fold where the network did converge, it slightly underperformed the network trained on b0 only, on both b1000 pre-term and b700 fetal images. Moreover, being b-value independent is a desirable property since different b-values are used in different centers, in particular for clinical fetal imaging (400, 500, 600, 700 s/mm²) (6,10,12,29,60). In fact, the same b0 network trained on pre-term data generalized to b700 fetal images, where we qualitatively showed its advantage, hence supporting the utility of pre-term data for fetal imaging, as in Karimi et al. (61), where pre-term colored FA and DW-MRI fetal scans were used to successfully predict fetal colored FA with a convolutional neural network. Furthermore, the FA and MD of the corpus callosum generated using the autoencoder-enhanced volumes are in the range of values provided by a recent study (59). This is a necessary but not sufficient condition for the validity of our framework on fetal data.
Notably, our trained network was able to reduce the noise in the data by learning the main features across images at different noise levels. This can be explained by two points. First, our autoencoder was exposed to different low levels of noise (as the dHCP data was already denoised), and hence the encoded features of the latent space ought to be noise-independent. Second, generative autoencoders intrinsically yield high-SNR outputs due to the desired smoothness property of the latent space (62).
The proposed framework could be applied to correct for anisotropic voxel sizes and can be used for slice outliers recovery in case of extreme motion artifacts for example. In fact, the artificially removed middle slices in our experiments can represent corrupted slices that may need to be discarded or replaced using interpolation (36)(37)(38). Our autoencoder can hence be used to recover these damaged slices using neighboring ones.
The power of our method compared to conventional interpolations resides in two points. First, the amount of data used to predict/interpolate the middle slice. While only two slices will be used in traditional interpolation approaches, our method will in addition take advantage of the thousands of slices to which the network has been exposed and from which the important features have been learned (without any supervision) in the training phase. Second, based on the manifold hypothesis, our method performs interpolations in the learned encoding space, which is closer to the intrinsic dimensionality of the data (63), and hence all samples from that space will be closer to the true distribution of the data compared to a naive interpolation in the pixel/voxel space.
Although our network performed quantitatively better than conventional interpolation methods in pre-term subjects, its output is usually smoother and hence exhibits lesser details. This is a well-known limitation of generative autoencoders, such as variational autoencoders, and the consequence of the desirable property of making the latent space smooth (62). Generative Adversarial Networks (64) can be an interesting alternative to overcome this issue. However, they have other drawbacks as being more unstable and less straightforward to train (65) than autoencoders. But if trained properly, they can achieve competitive results.
In this work, qualitative results only were provided on fetal DW-MRI. We are limited by the lack of ground truth in this domain, hence our results are a proof of concept. The future release of the fetal dHCP dataset will be very valuable to further develop our framework and proceed to its quantitative assessment for fetal DW-MRI.
In future work, we want to add random Rician noise in the training phase to increase the network robustness and predictive power. We also want to extend the autoencoding to the angular domain by using spherical harmonics decomposition for each 4D voxel and hence enhancing both spatial and angular resolutions (66).
Although unsupervised learning via autoencoders has recently been used in DW-MRI to cluster individuals based on their microstructural properties (67), this is, to the best of our knowledge, the first unsupervised learning study for super-resolution enhancement in DW-MRI using autoencoders.
As fetal diffusion imaging suffers from low through-plane resolution, super-resolution using autoencoders is an appealing method to artificially but realistically overcome this caveat. This can help depict more precise diffusion properties through different models, such as DTI or ODFs, and potentially increase the detectability of fiber tracts that are relevant for the assessment of certain neurodevelopmental disorders (29).
DATA AVAILABILITY STATEMENT
Part of the analyzed datasets were publicly available. This data can be found here: http://www.developingconnectome.org/datarelease/data-release-user-guide/.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Cantonal Ethical Committee, Zürich. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
AUTHOR CONTRIBUTIONS
HK performed the technical analysis, wrote the manuscript, provided original idea, and integrated all revisions. EC-R and HK contributed to the conceptualization of the research project. EC-R, PD, GG, AJ, and HL revised the manuscript. HL helped in the data generation. PD helped in the technical analysis. GG, MK, and YA-G helped in the processing of the fetal data. YA-G and MK acknowledged the manuscript. AJ provided the fetal data. MB conceptualized, designed and supervised the research project, contributed to the manuscript and to the final revision, and provided funding. All authors contributed to the article and approved the submitted version.
PtRu/C Electrocatalysts Prepared Using Gamma and Electron Beam Irradiation for Methanol Electrooxidation
PtRu/C electrocatalysts (carbon-supported PtRu nanoparticles) were prepared in a single step by submitting water/2-propanol mixtures containing Pt(IV) and Ru(III) ions and the carbon support to gamma and electron beam irradiation. The electrocatalysts were characterized by energy dispersive X-ray analysis (EDX), X-ray diffraction (XRD), transmission electron microscopy (TEM), and cyclic voltammetry, and tested for methanol electrooxidation. The PtRu/C electrocatalyst could be prepared in a few minutes using high-dose-rate electron beam irradiation, whereas some hours were necessary using low-dose-rate gamma irradiation. The obtained materials showed the face-centered cubic (fcc) structure of Pt and Pt alloys, with average nanoparticle sizes of around 3 nm. The material prepared using electron beam irradiation was more active for methanol electrooxidation than the material prepared using gamma irradiation.
Introduction
Fuel cells convert chemical energy directly into electrical energy with high efficiency. However, the use of hydrogen as a fuel presents problems, principally with storage for mobile and portable applications [1-3]. Thus, there has been an increasing interest in the use of alcohols directly as fuel (direct alcohol fuel cell, DAFC). Methanol has been considered the most promising alcohol, and carbon-supported PtRu nanoparticles (PtRu/C), normally with a Pt:Ru atomic ratio of 50:50, the best electrocatalyst [4]. However, the catalytic activity of PtRu/C electrocatalysts strongly depends on the method of preparation, and this is one of the major topics studied in direct methanol fuel cells (DMFC) [4,5]. PtRu/C electrocatalysts are produced mainly by impregnation and colloidal methods. Although the impregnation method is a simple procedure, its major drawback is the difficulty in controlling nanoparticle size and distribution. The colloidal methods have the advantage of producing very small and homogeneously distributed carbon-supported metal nanoparticles; however, the methodologies are very complex [4].
Lately, radiation-induced reduction of metal ion precursors in solution has been described as a way to prepare carbon-supported metal nanoparticles for fuel cell applications. Despite the complexity and cost of electron beam or gamma irradiation facilities, the methodologies used to prepare the electrocatalysts are easy to perform [6-10]. Le Gratiet et al. [6] prepared platinum nanoparticles by submitting a K2PtCl4 salt dissolved in a CO-saturated water/2-propanol solvent to gamma irradiation. The reduction of the platinum ions occurred by a combined effect of CO and radicals produced by radiolysis, leading to the formation of platinum nanoparticles of 2-3 nm diameter that were further impregnated on the carbon support. These catalysts were found to be effective for methanol or hydrogen electrooxidation. Oh et al. [7] prepared Pt-Ru alloy particles dispersed on various carbon structures in water/2-propanol using gamma irradiation, but no tests for DMFC were described using the obtained materials. Wang et al. [8] prepared Pt nanoparticles by irradiating an aqueous solution of chloroplatinic acid in the presence of 2-propanol as a radical scavenger and sodium sulfonate as a surfactant. The synthesized Pt nanoparticles (2.5-4.0 nm) were further impregnated on multiwalled carbon nanotubes. The obtained material was tested in a single proton exchange membrane fuel cell operating with H2/O2, and the results showed that the electrocatalysts were very promising. Silva et al. [9] prepared PtRu/C electrocatalysts in a single step by submitting water/ethylene glycol solutions containing Pt(IV) and Ru(III) ions and the carbon support to gamma irradiation at room temperature under stirring. The obtained carbon-supported PtRu nanoparticles showed mean particle sizes of 2.5-3.0 nm and were very active for methanol oxidation. Recently, Chai et al. [10] prepared Pt (80 wt%) supported on a mesoporous carbon support in a single step. The Pt salt was dissolved in a solution of water/2-propanol, and the carbon support was added to the solution. The mixture was irradiated at room temperature under stirring. The obtained material exhibited enhanced catalytic activity towards the oxygen reduction reaction (ORR). In this work, PtRu/C electrocatalysts were prepared using high-dose-rate electron beam and low-dose-rate gamma irradiation and were tested for methanol electrooxidation.
Experimental
PtRu/C electrocatalyst (20 wt%, Pt : Ru atomic ratio of 50 : 50) was prepared using H 2 PtCl 6 •6H 2 O (Aldrich) and RuCl 3 •1.5H 2 O (Aldrich) as metal sources, which were dissolved in water/2-propanol solution (25/75, v/v).After this, the carbon Vulcan XC72R, used as a support, was dispersed in the solution using an ultrasonic bath.The resulting mixture (dissolved metal ions and the carbon support) was submitted to gamma irradiation ( 60 Co source, dose rate of 0.5 kGy h −1 ) under stirring at room temperature for 6 h (total dose of 3 kGy).After irradiation, the mixture was filtered, and the solid PtRu/C electrocatalyst was washed with water and dried.In a similar way, the resulting mixture was submitted under stirring at room temperature to an electron beam source for 3 min (Electron Accelerator's Dynamitron job 188-IPEN/CNEN-SP, dose rate 5760 kGy h −1 , total dose of 288 kGy) [11].
The Pt:Ru atomic ratios were determined by semi-quantitative EDX analysis using a Philips XL30 scanning electron microscope with a 20 keV electron beam, equipped with an EDAX DX-4 analyzer.
XRD analysis was performed using a Rigaku Miniflex II diffractometer with a Cu Kα radiation source (λ = 0.15406 nm). The diffractograms were recorded from 2θ = 20° to 90° with a step size of 0.05° and a scan time of 2 s per step. The average crystallite size was calculated using the Scherrer equation [12].
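As a worked example of the Scherrer estimate applied later to the (220) reflection: the shape factor K and the peak width below are assumptions, since the paper reports only the resulting size of about 3 nm.

```python
# Scherrer equation: d = K * lambda / (beta * cos(theta))
import numpy as np

K = 0.9            # commonly used shape factor (assumption)
lam = 0.15406      # Cu K-alpha wavelength (nm)
two_theta = 67.0   # approximate (220) peak position (degrees)
fwhm_deg = 3.0     # hypothetical peak width (degrees), not from the paper

beta = np.radians(fwhm_deg)           # peak width in radians
theta = np.radians(two_theta / 2.0)   # Bragg angle
d = K * lam / (beta * np.cos(theta))  # crystallite size in nm
print(f"{d:.1f} nm")                  # ~3.2 nm with these placeholder inputs
```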
Transmission electron microscopy (TEM) was carried out using a JEOL JEM-2100 electron microscope operated at 200 kV. The particle size distribution histogram was determined by measuring 150 particles from the micrographs.
Electrochemical studies of the electrocatalysts were carried out using the thin porous coating technique [13]. An amount of 20 mg of the electrocatalyst was added to 50 mL of water containing 3 drops of a 6% polytetrafluoroethylene (PTFE) suspension. The resulting mixture was treated in an ultrasound bath for 10 min, filtered, and transferred to the cavity (0.30 mm deep and 0.36 cm² in area) of the working electrode. The quantity of electrocatalyst in the working electrode was determined with an accuracy of 0.0001 g using an analytical balance. In the cyclic voltammetry experiments, the current values (I) were normalized per gram of platinum (A g_Pt⁻¹). The quantity of platinum was calculated as the mass of the electrocatalyst present in the working electrode multiplied by its percentage of platinum. The reference electrode was an RHE, and the counter electrode was a Pt plate. Electrochemical measurements were made using a Microquimica potentiostat/galvanostat (model MQPG01, Brazil) coupled to a personal computer running the Microquimica software. Cyclic voltammetry was performed in a 0.5 mol L⁻¹ H2SO4 solution saturated with N2. Methanol oxidation was performed at 25 °C using 1.0 mol L⁻¹ methanol in 0.5 mol L⁻¹ H2SO4. For comparative purposes, a commercial PtRu/C E-TEK electrocatalyst (20 wt%, Pt:Ru molar ratio 50:50, Lot # B0011117) was used.
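The per-gram-of-Pt normalization amounts to a small calculation; the masses and the measured current below are illustrative placeholders, not the experimental values, and the Pt share of the PtRu mass is an assumption derived from the 50:50 atomic ratio.

```python
# Illustrative normalization of current per gram of Pt (all values hypothetical).
catalyst_mass_g = 0.0050     # electrocatalyst mass weighed into the electrode
metal_loading = 0.20         # 20 wt% metal (PtRu) on carbon
pt_fraction_of_metal = 0.66  # approximate Pt mass share for a 50:50 atomic ratio

pt_mass_g = catalyst_mass_g * metal_loading * pt_fraction_of_metal
current_A = 0.010            # hypothetical measured current
print(current_A / pt_mass_g, "A per gram of Pt")
```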
Results and Discussion
Electron beam or gamma irradiation of a water solution containing metal ions causes the ionization and excitation of water, forming the species shown in Equation (1) [14] (the species listed are the standard radiolysis products; the equation itself was reconstructed from the surrounding text):

H2O ⇝ eaq−, H•, OH•, H2, H2O2, H3O+ (1)

The aqueous solvated electrons, eaq−, and the H• atoms are strong reducing agents and reduce metal ions down to the zero-valent state (Equations (2) and (3)):

M+ + eaq− → M0 (2)
M+ + H• → M0 + H+ (3)

Similarly, multivalent ions, like Pt(IV) and Ru(III), are reduced by multistep reactions. However, OH• radicals could oxidize the ions or the atoms into a higher oxidation state and thus counterbalance the reduction reactions (2) and (3). An OH• radical scavenger is therefore added to the solution, in this case 2-propanol, which reacts with these radicals leading to the formation of radicals exhibiting reducing power that are able to reduce metal ions (Equations (4) and (5)) [14]:

(CH3)2CHOH + OH• → (CH3)2C•OH + H2O (4)
M+ + (CH3)2C•OH → M0 + (CH3)2CO + H+ (5)
In this manner, the atoms produced by the reduction of the metal ions progressively coalesce, leading to the formation of carbon-supported PtRu nanoparticles (PtRu/C electrocatalyst). The results of PtRu/C electrocatalyst preparation using electron beam and gamma irradiation are shown in Table 1.
The water/2-propanol solution containing Pt(IV) and Ru(III) ions used in the preparation of the PtRu/C electrocatalysts showed a dark brown color before the addition of the carbon support and irradiation. After irradiation and separation of the solid (PtRu/C electrocatalyst) by filtration, the reaction medium became colorless, suggesting that all of the Pt(IV) and Ru(III) ions were reduced. To confirm this assumption, a qualitative test using potassium iodide [15] was performed; it did not detect Pt ions in the filtrates, which suggests that all Pt(IV) ions were reduced to metallic Pt. As no Pt ions were detected in the filtrates, and the obtained Pt:Ru atomic ratios were similar to the nominal ones (Table 1), it was considered that both electrocatalysts were obtained with 20 wt% of metal loading. Using low-dose-rate gamma irradiation, the total reduction of the metal ions was observed only after 6 h of irradiation. On the other hand, only 3 min were necessary to observe the total reduction of the metal ions using high-dose-rate electron beam irradiation. The X-ray diffractograms of the Pt/C and PtRu/C electrocatalysts prepared using electron beam and gamma irradiation are shown in Figure 1.
The X-ray diffractograms showed a broad peak at about 25°, which was associated with the Vulcan XC72R support material, and five diffraction peaks at about 2θ = 40°, 47°, 67°, 82°, and 87°, associated with the (111), (200), (220), (311), and (222) planes, respectively, which are characteristic of the face-centered cubic (fcc) structure of platinum and platinum alloys [16]. No peaks that could be attributed to metallic ruthenium or to ruthenium-rich materials with a hexagonal structure were observed in the XRD patterns. On the other hand, the presence of these species as amorphous materials cannot be discarded. The X-ray diffractogram of the PtRu/C electrocatalyst prepared using electron beam irradiation showed the diffraction peaks of the fcc phase shifted to higher angles with respect to those of the Pt/C electrocatalyst, indicating a lattice contraction and some alloy formation. This was not observed for the PtRu/C electrocatalyst prepared using gamma irradiation. The (220) reflection of the Pt fcc structure was used to calculate the average crystallite sizes according to the Scherrer equation, and for both electrocatalysts the calculated values were about 3 nm.
TEM micrographs and the corresponding particle size distribution histograms of the PtRu/C electrocatalysts prepared using gamma and electron beam irradiation are shown in Figures 2(a) and 2(b), respectively. It can be seen for both electrocatalysts that the nanoparticles were homogeneously distributed on the carbon support, and the mean particle sizes were around 3 and 2.5 nm for the materials obtained using gamma and electron beam irradiation, respectively.
The cyclic voltammograms in acid medium of the PtRu/C electrocatalysts are shown in Figure 3.
The cyclic voltammograms (CV) of both PtRu/C electrocatalysts do not show a well-defined hydrogen adsorption-desorption region (0-0.4 V) and show an increase in the current values in the double-layer region (0.4-0.8 V) when compared to the CV of the Pt/C electrocatalyst [17]. The increase in current values in the double-layer region was attributed to capacitive currents and the redox processes of ruthenium oxides [17,18]. However, comparing the CVs of both PtRu/C electrocatalysts, the material prepared using electron beam irradiation shows a more defined hydrogen region than the material prepared using gamma irradiation, while the material prepared using gamma irradiation shows a more pronounced double-layer region. This could suggest that the surface of the material prepared using electron beam irradiation is more enriched in Pt, while the surface of the material prepared using gamma irradiation is more enriched in Ru.
The electrooxidation of methanol was studied by cyclic voltammetry in 1.0 mol L−1 methanol in 0.5 mol L−1 H2SO4 (Figure 4).
The electrooxidation of methanol started only at about 0.45 V for the PtRu/C electrocatalyst prepared using gamma irradiation, and the current values were lower than those observed for the PtRu/C electrocatalyst prepared using an electron beam. For the latter, electrooxidation started at about 0.35 V, and the performance of this catalyst was very similar to that of the commercial PtRu/C electrocatalyst from E-TEK. Studies have shown that the maximum activity for methanol oxidation at room temperature could be obtained using PtRu/C electrocatalysts with low Ru coverage [19][20][21].
The obtained results could be explained by the different dose rates of electron beam and gamma radiation and by the different reduction potentials of the Pt(IV) and Ru(III) ions. Using electron beam irradiation (high dose rate), the reduction of Pt(IV) and Ru(III) ions proceeds very quickly, which enhances the probability of alloying, as confirmed by the XRD measurements. Thus, the carbon-supported PtRu nanoparticles obtained using the electron beam seem to have a more homogeneous distribution of Pt and Ru atoms on the nanoparticle surface. On the other hand, at a low dose rate (gamma source) it seems that the Pt(IV) ions were reduced before the Ru(III) ions. In this case, Ru atoms deposit preferentially on the presupported Pt nanoparticles, and the resulting carbon-supported PtRu nanoparticles have a Ru-rich surface. Another possibility is that the Pt(IV) and Ru(III) ions were reduced with equal probabilities by the radiolytic radicals, but a further electron transfer from the less noble metal atom, Ru, to the more noble metal ion, Pt(IV), could also result in the formation of carbon-supported PtRu nanoparticles with a surface enriched in Ru atoms, which could explain the low activity of this sample for methanol electrooxidation.
Conclusions
An active PtRu/C electrocatalyst for methanol oxidation was easily obtained in a single step within a few minutes using electron beam irradiation. The PtRu/C electrocatalysts showed the typical fcc structure of platinum and platinum alloys, with average particle sizes of about 2.5-3 nm. At room temperature, the material prepared using electron beam irradiation shows a methanol oxidation performance similar to that of a commercial PtRu/C electrocatalyst.
Figure 1: X-ray diffractograms of Pt/C and PtRu/C electrocatalysts prepared using electron beam and gamma irradiation.
Figure 2: TEM micrographs and particle size distributions of PtRu/C electrocatalysts prepared using (a) gamma and (b) electron beam irradiation.
Table 1: Influence of electron beam and gamma irradiation on the Pt:Ru atomic ratio and average crystallite size of the PtRu/C electrocatalysts (20 wt% of metals, nominal Pt:Ru atomic ratio of 50:50, water/2-propanol volumetric ratio of 25/75). | 3,290 | 2012-01-01T00:00:00.000 | [
"Materials Science"
] |
Polymer Nanocomposite Graphene Quantum Dots for High-Efficiency Ultraviolet Photodetector
The influence of hydrothermally synthesized, electrochemically active graphene quantum dots on the photocurrent sensitivity of a conjugated polymer used in a novel single-layer device has been investigated. A high-performance ultraviolet photodetector was fabricated by depositing a polypyrrole-graphene quantum dot (PPy-GQDs) active layer on the ITO electrode; the devices were exposed to ultraviolet (UV) sources with 265 and 355 nm wavelengths for about 200 s, and we examined the time-dependent photoresponse. The excellent performance of the GQDs was exploited as a light absorber, acting as an electron donor to improve the carrier concentration. PGC4 exhibits a high photoresponsivity of up to 2.33 µA/W at a 6 V bias, and the photocurrent changes from 2.9 to 18 µA. The electrochemical behavior was studied using an electrochemical workstation. The cyclic voltammetry (CV) results show that the hysteresis loop is optically tunable with a UV light source at 265 and 355 nm at scan rates of 0.1 to 0.5 V/s. The photocurrent response of the PPy-GQDs devices may be applicable to optoelectronic devices.
Introduction
In the past few years, graphene quantum dots (GQDs) have attracted significant attention due to their unique characteristics such as highly tunable photoluminescence (PL) [1], good solubility in water, superior biocompatibility [2], large-scale production, and low cost. GQDs are one of the most significant zero-dimensional materials due to their prominent electronic [3] and optical properties [4]. A unique characteristic of GQDs is their formation of multiple layers with sizes less than 30 nm. GQDs have also drawn attention for optoelectronic [5] applications due to their exemplary fluorescence behavior, charge induced by the quantum confinement effect [6], and edge effect [7,8]. The dispersion of quantum dots in a polymer matrix is an important aspect that has been studied extensively. Embedding quantum dots in polymeric matrices can enhance their stability and prevent agglomeration. Such composites show wide absorption spectra ranging from UV to visible wavelengths. Furthermore, GQDs are expected to show better biocompatibility than other inorganic semiconductor nanoparticles. Hence, there is a motivation to study the photoresponsive properties of conducting polymer-graphene quantum dot nanocomposites [9]. Among the various conducting polymers, polypyrrole (PPy) offers a unique combination of properties: simple synthesis, low cost, mass production, high electrical conductivity, and high charge-transfer resistance. It is a conjugated polymer with strong absorption in the visible region and is environmentally stable. However, polypyrrole applications are limited by its poor processability, mechanical properties, and solubility [10,11]. The modification of the conductive polymer with a carbon-based material is essential because the carbon material shows improved mechanical, electrical, and electrochemical properties compared with the conducting polymer alone, leading to a wide variety of applications, including sensors, catalysis, and energy storage. Inorganic colloidal quantum dots have high stability under ambient conditions and good optical properties, and composites based on them have demonstrated high-performance UV photodetectors at low cost with facile fabrication. W. Xu et al. [12] proposed a photodiode based on a ZnO-GQDs/PVA nanocomposite that shows a short rise time of 0.06 s with a responsivity of 46.5 A/W. Eunhee Park et al. [13] presented a poly(9-vinylcarbazole)/SnO2 quantum dot heterojunction-based self-powered high-performance ultraviolet photodetector. The device demonstrated exceptional responsivity (49.6 mA W−1 at 254 nm and 166 mA W−1 at 220 nm) and detectivity (2.16 × 10^10 Jones at 254 nm) under optimal conditions. A hybrid photodetector constructed from a poly(3-hexylthiophene-2,5-diyl) (P3HT) bulk heterojunction (BHJ) composite blended with a photoactive layer of lead sulfide QDs (PbS QDs) produces large interfacial charge separation, resulting in an improved photocurrent of the hybrid photodetector. A high-performance hybrid photodetector of CuInSe2 nanocrystals and poly(3-hexylthiophene) exhibits a detectivity greater than 10^10 cm Hz^1/2 W−1 [14]. Dongyang Zhu et al. [15] reviewed recent developments in infrared photodetectors based on polymers. Polymer materials are viable for a new generation of IR PDs due to their designable molecular structure, solution processability, large-area preparation, and inherent flexibility.
The performance of PDs is improved, and their application scenarios are broadened, by various molecular design methodologies and cutting-edge device designs, which further encourages the rapid development of IR PDs. Baosen Zhang et al. [16] reported broadband PDs based on perovskites, solution-processed at ambient temperature and featuring highly electrically conductive PbSe quantum dots (QDs). The solution-processed BHJ broadband PDs exhibit a responsivity of 10 mA/W, a detectivity of 10^11 Jones, and a linear dynamic range of 53 dB over a spectral response ranging from 350 nm to 2500 nm.
In this work, we demonstrate a facile, low-cost synthesis in which the PPy-GQD composite is prepared through a chemical oxidation polymerization process. It is essential to devise a strategy to synthesize graphene quantum dots of suitable size and uniform distribution to address the current challenges, because these features play a significant role in good reversibility and response speed. The luminescence of the GQDs was found to be tunable by varying their concentration. Furthermore, the electrical hysteresis behavior of the composite was observed; the hysteresis effect of the device can be tuned by the concentration and size of the GQDs. We report a novel UV photodetector fabricated from PPy-GQDs composites. Because of the large band gap of GQDs, the devices could detect UV light with wavelengths of 265 and 355 nm at scan rates of 0.1 to 0.5 V/s. This work may open a wide range of applications for GQD composite-based light-sensitive devices.
Synthesis of Graphene Quantum Dots
The graphene quantum dots were prepared by a one-step hydrothermal method: 3 wt% of glucose powder was dissolved in 100 mL of deionized water and sonicated for 20 min after complete dissolution [17]. This mixture was poured into a 100 mL Teflon-lined stainless steel autoclave and heated at 180 °C for 3 h; the resultant product changed its color from transparent to pale yellow. The as-prepared solution was centrifuged at 3000 rpm for 30 min, and the resulting yellow solution contained the graphene quantum dots.
Synthesis of Polypyrrole-Graphene Quantum Dots Composite
Pyrrole monomer (0.3 M) and FeCl3 (0.7 M) were mixed in 100 mL of 1 M HCl with continuous stirring. Then 20 or 40 mL of the GQD solution was added, and the mixture was heated at 60 °C for 20 min. Next, 100 mL of 0.7 M FeCl3 was poured into the above solution, which was kept for polymerization (0 °C for 24 h) and centrifuged for 15 min, and the PPy-GQDs (PGC) composites were obtained. The composites containing 20 and 40 mL of graphene quantum dots in the polypyrrole were coded as PGC2 and PGC4, respectively.
Fabrication of a Single Layer Device
The ITO-coated glass substrates were cleaned ultrasonically in acetone and methanol at 27 °C for 10 min and rinsed in deionized water. The chemically cleaned ITO-coated glass substrates were dried using N2 gas with a purity of 99.9999%. The PPy-GQDs active layer was deposited on the ITO by simple brush coating and dried at 60 °C for 15 min. Silver paste was applied by the doctor blade method on the top of the device, giving an effective area of 1 cm². The fabricated devices, with thicknesses of 91.3 nm (PGC2) and 84.7 nm (PGC4), were examined by CV and I-t measurements under UV light. The scheme of the PPy-GQDs synthesis and the fabrication of the device is shown in Scheme 1.
Scheme 1. Schematic diagram of the synthesis of PPy-GQDs composites and fabrication of a single-layer device.
Results
In the following sections we report several measurements (FT-IR, XRD, UV-Visible absorption, PL, cyclic voltammetry, TEM, device photoresponse and photocurrent, Mott-Schottky plots, and finally the energy level diagram), which serve to highlight the physical and chemical properties of the synthesized GQD-based composite nanostructures and the UV photodetector device.
FT-IR Study
First, we report the FT-IR spectra. PPy shows a characteristic peak at 1044 cm−1 attributed to the C-N bond. The peaks observed at 1530 cm−1 and 3732 cm−1 are associated with the C-C and N-H stretching vibrations of the pyrrole ring [18]. In the composites, these two peaks shift to 1539 cm−1 and 3749 cm−1 due to the interaction between the carboxylic acid group of the GQDs and the N-H bond of PPy.

A peak at 2324 cm−1 corresponding to the C-H bending vibration shifted to 2386 cm−1 after the addition of GQDs. The C=O bond observed at 1703 cm−1 is a characteristic GQDs peak that is not seen in pure PPy, as shown in Figure 1, and has been reported as a fingerprint of GQDs [19]. In the PGC2 and PGC4 composites this peak shifts down to 1691 and 1686 cm−1 with decreasing intensity, indicating the dispersion of the GQDs in the PPy system. Interestingly, the downshift to 1686 cm−1 is due to the π-π interaction between the GQD layers and the aromatic PPy ring, elucidating the interaction of C=O with the N-H of the polymer; the most probable chemical reaction mechanism is shown in Figure 1b. In addition, the peak at 1401 cm−1 is attributed to the skeletal vibration of the graphene domain [20]. Therefore, all these peaks confirm the PPy-GQDs composite.
XRD Study

The X-ray diffraction analysis of PPy and the PPy-GQDs composites, shown in Figure 2, elucidates this point. The GQDs exhibit a (002) peak related to an interplanar spacing of 0.23 nm, which is less than the 0.33 nm interplanar spacing of graphite (Figure 2a). In the composites, the peak shifts to a higher diffraction angle and broadens, which means that the crystalline nature of the GQDs affects the amorphous structure of PPy and indicates homogeneous dispersion of the GQDs in the PPy matrix [21]. The average chain separation [22] of PPy, GQDs, PGC2, and PGC4 is 4.51 Å, 4.58 Å, 2.26 Å, and 2.17 Å, respectively.
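As an aside, the quoted interplanar spacings follow from Bragg's law; a minimal sketch (assuming a Cu K-alpha source, which is not stated in this paper, and an illustrative peak position):

```python
import math

def bragg_d_spacing_nm(two_theta_deg, wavelength_nm=0.15406):
    """Bragg's law with n = 1: d = lambda / (2 * sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta))

# A (002) reflection near 2-theta ~ 39 deg would give the ~0.23 nm spacing
# quoted for the GQDs; the measured peak position is not listed in the text.
print(bragg_d_spacing_nm(39.0))  # ~0.23 nm
```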
UV-Visible Absorption Study

To characterize the optical features of PPy, GQDs, and the PGC composites, their UV-Visible absorption spectra are shown in Figure 3. The PPy peak observed at 330 nm is ascribed to the π-π* bipolaron transition (Figure 3a). A typical absorption peak of GQDs at 242 nm is due to the π-π* transition of the aromatic sp2 domain, and another peak at 302 nm is attributed to the n-π* transition of C=O. In the PGC2 composite, the characteristic 242 nm GQD peak is enhanced in intensity and two new peaks appear at 329 and 379 nm, due to the GQD confinement effect and the interaction of the oxygen of the GQDs with the hydrogen of PPy, which promotes the n-π* transition [23]. The PGC4 composite shows peaks at 255 and 329 nm (Figure 3b) derived from the π-π* and n-π* transitions of GQDs, and the shifted peak at 423 nm is attributed to the conjugation between the GQDs and PPy [24]. Figure 3c shows the optical absorption analyzed using the Tauc equation, αhν = A(hν − Eg)^n [25], where Eg is the bandgap, α is the absorption coefficient, ν is the frequency, A is a constant, and n takes different values depending on the mode of transition. Here, n = 0.5 gives the best fit to the optical absorption data; therefore, the material allows the important direct bandgap transition. Plotting (αhν)^2 versus hν (Figure 3d) and extrapolating the linear portion of the graph to the hν axis gives bandgap values of 3.4 eV, 3.28 eV, 2.05 eV, and 1.62 eV for PPy, GQDs, PGC2, and PGC4, respectively.
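The Tauc extraction described above amounts to a linear fit of (αhν)² versus hν and an extrapolation to zero; a minimal sketch on synthetic data (the fit window and spectrum are illustrative, not the measured curves):

```python
import numpy as np

def tauc_direct_bandgap_eV(energy_eV, alpha, fit_window):
    """Fit the linear region of (alpha*h*nu)^2 vs h*nu and return its
    x-intercept, the direct-bandgap estimate used in the text."""
    y = (alpha * energy_eV) ** 2
    lo, hi = fit_window
    mask = (energy_eV >= lo) & (energy_eV <= hi)
    slope, intercept = np.polyfit(energy_eV[mask], y[mask], 1)
    return -intercept / slope

# Synthetic spectrum with a true gap of 2.05 eV (the PGC2 value), to show usage.
E = np.linspace(1.5, 3.5, 200)
alpha = np.sqrt(np.clip(E - 2.05, 0.0, None)) / E  # makes (alpha*E)^2 linear in E
print(tauc_direct_bandgap_eV(E, alpha, fit_window=(2.2, 3.0)))  # ~2.05
```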
Thus, the electrical conductivity of polypyrrole increases in the presence of GQDs due to the decrease in the bandgap [26]. To visualize the morphology of the realized composite nanostructures, we show in the next section the FE-SEM images, which clarify the GQD allocation sites.
FE-SEM Analysis of PPy and the PPy-GQDs Composite
The FE-SEM images of pure PPy show a massive spherical morphology due to physical fusing or polymerization among the chains, as shown in Figure 4a. The FE-SEM image of the GQDs shows a spherical morphology (Figure 4b). Figure 4c,d shows the composite morphology of PGC2 and PGC4, respectively, demonstrating the porous structure [27]. The GQDs initially carried abundant oxygen groups such as -COOH and -OH. At increased GQD concentration, the GQDs were unable to provide enough templates for the growth of the pyrrole monomer on their surface [28]. Thus, it is clear that the GQDs are embedded in the porous regions and flatten the surface of the polymer matrix.
PL Analysis of PPy, GQDs, and PPy-GQDs Composite
The photoluminescence spectra of PPy, GQDs, and the PPy-GQDs composites are shown in Figure 5. The excitation wavelength (λex) is 400 nm for the GQDs, and the PL peak of the GQDs is found at 462 nm. The PL peak of PPy is at 467 nm, and the PGC composite peaks are at 523 and 535 nm for PGC2 and PGC4, respectively, at the same excitation wavelength, as shown in Figure 5a. The redshift behavior of the composites is attributed to the strong orbital interaction between the π-conjugated PPy and the GQDs [29]. Electron transfer from the polymer to the GQDs may narrow the bandgap, enabling the tuning of the PL by varying the concentration of GQDs, as shown in Figure 5b. As the concentration of GQDs increases, the PL peak moves towards longer wavelengths, from 523 to 535 nm (Figure 5a), mainly due to the functionalization of the GQDs in the PPy matrix. Pure GQDs exhibit excitation-dependent emission: changing the excitation wavelength from 200 to 600 nm gradually increases the emission peak intensity [17,30], as shown in Figure 5b. The excitation-dependent PL shift is attributed to inhomogeneities of the energy levels originating from the size distribution of the GQDs.
The Cyclic Voltammetry Study
The electrochemical behavior of PPy, GQDs, and the PGC composites was studied by CV measurements, shown in Figure 6a-d. In a three-electrode system, platinum was used as the counter electrode, Ag/AgCl as the reference electrode, and the PGC composite as the working electrode, with 1 M NaCl solution as the electrolyte. The working electrode was prepared by coating the prepared materials on ITO glass (with a 1 cm² area) and drying at 60 °C for 1 h. Potential ranges of −2 to 2 V for PPy, −3 to 3 V for GQDs and PGC2, and −0.8 to 0.8 V for the PGC4 composite were used at different scan rates. As the scan rate increases from 0.1 to 0.5 V/s, the corresponding current increases, suggesting that the current is directly proportional to the square root of the scan rate. The CV curves of the PGC4 composite show a rectangular shape, confirming its good electrochemical behavior.
Figure 6. CV curves at scan rates of 0.1 to 0.5 V/s; the inset shows the CV curves of the composites over 20 segments at a 0.1 V/s scan rate.
Transmission Electron Microscopy
The HR-TEM image of individual GQDs shows lattice fringes of 0.29 nm (Figure 7a), which correspond to the (110) basal plane distance of bulk graphite [31,32]. The average particle size observed for the GQDs is 2 to 7 nm. Under the high pressure of the hydrothermal technique, the carbonization of glucose occurs first, and the product then nucleates, crystallizes, and grows [33]. In the PGC composites, the pyrrole radical cation is polymerized on the surface of the GQDs to form island-like matrices, as shown in Figure 7c,d. The HR-TEM images show an almost homogeneous distribution of GQDs in the polymer matrix; however, compared to pure GQDs, the shape of the GQDs in the polymer is irregular due to the chemical reaction between the functional groups located at the GQD surface and the polymer matrix [34]. The SAED images of GQDs and PGC4, shown in the insets of Figure 7a,d, show a well-defined diffraction pattern for the GQDs, confirming their crystalline structure. However, there was no such diffraction pattern for the composite, due to the amorphous nature of PPy, which correlates with the XRD result.
Photoresponse of PPy-GQDs Composite Device
CV measurements on the UV photodetectors under the illumination of 265 and 355 nm UV light at a power of 8 W, and in the dark, characterized the device's performance. The holes located at the GQDs are transferred to the ITO electrode, enhancing the photodetector performance. The attachment of the polymer to the GQD layer might facilitate the enhancement in the photoresponse of the PDs [17]. In most cases, the photocurrent decreased slightly compared to the dark current due to the adsorption of oxygen and water molecules on the surface of the graphene quantum dots in air. When the device is enclosed in the dark, the adsorbed oxygen and water molecules are ionized by capturing free electrons from the GQDs due to their strong electronegativity. The current was measured as a function of voltage swept from −4 to +4 V. Under the same voltage and scan rate, the PPy-GQDs devices produce a larger hysteresis, a sign of greater energy storage capacity [35]. The stored energy is calculated from the area under the hysteresis loop, which is 8.2 nA and 2.3 × 10−5 A for PGC2 and PGC4, respectively, over the ±4 V scan range. It is evident that more than one order of magnitude more energy is stored in PGC4 compared to PGC2.
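The loop area referred to above can be computed from a closed I-V sweep with the shoelace formula; a minimal sketch on a toy loop (the numbers are hypothetical, not the measured PGC data):

```python
import numpy as np

def hysteresis_loop_area(voltage, current):
    """Area enclosed by one closed I-V cycle (shoelace formula)."""
    v = np.asarray(voltage)
    i = np.asarray(current)
    return 0.5 * abs(np.sum(v * np.roll(i, -1) - i * np.roll(v, -1)))

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
v = 4 * np.cos(t)                        # sweep between -4 and +4 V
i = 1e-5 * np.sin(t) + 2e-6 * np.cos(t)  # toy hysteretic current
print(hysteresis_loop_area(v, i))        # enclosed area in V*A
```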
Typical CV curves of the PGC2 composite devices measured under 355 nm UV light are shown in Figure 8f. As the scan rate increases from 0.1 to 0.5 V/s, the corresponding current was found to increase, which means that the current is directly proportional to the square root of the scan rate [14]. Typically, the influence of temperature on photodetection varies as the square of the temperature, as reported in Ref. [36]. Thus, a change in temperature affects the photodetector more in the photovoltaic mode than in the photoconductive mode of operation. In general, in the photoconductive mode of operation, the dark current may approximately double for every 10 °C increase in temperature. As the temperature rises continuously, beyond 250 °C the photocurrent of a device suffers degradation.
Photocurrent Measurements

GQDs under illumination with UV light generate electron-hole pairs. The functionality of the photodetector was investigated by plotting the I-V curve to determine the photocurrent, where Idark and Ilight are the dark current and the current under light illumination, respectively. The spectral response of the photodetectors was investigated under the illumination of an 8 W UV source with wavelengths of 265 and 355 nm using Equation (2),

R = (Ilight − Idark)/Pinc, (2)

where Pinc is the incident illumination power on the effective area [37]. We compared the photoresponse of GQDs, PGC2, and PGC4 under 265 nm and 355 nm illumination, as shown in Figure 9a. An essential parameter for the photodetection of PGC2 and PGC4 is the response time, analyzed in Figure 10. The enhanced response time and responsivity of both PPy-GQDs composites are ascribed to the improved interconnection between the GQDs and PPy. UV light sources at 265 nm and 355 nm were used for illumination. The photocurrent of both composites was measured at each wavelength; for comparison, the responsivity of PGC4 under 355 nm illumination, 2.33 µA/W, is an order of magnitude higher than that under 265 nm illumination, as listed in Table 1.
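Equation (2) is straightforward to apply; a minimal sketch (the input currents are the photocurrent extremes quoted in the abstract and the 8 W lamp rating, so the result is illustrative only and differs from the 2.33 µA/W reported for PGC4, where the actual incident power on the device area enters):

```python
def responsivity_A_per_W(i_light_A, i_dark_A, p_inc_W):
    """Responsivity R = (I_light - I_dark) / P_inc, Equation (2)."""
    return (i_light_A - i_dark_A) / p_inc_W

# Illustrative inputs: 18 uA under light, 2.9 uA in the dark, 8 W source.
print(responsivity_A_per_W(18e-6, 2.9e-6, 8.0))  # ~1.9e-6 A/W
```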
Mott-Schottky Plots for the PGC2 and PGC4 Composites

Mott-Schottky plots were measured to study the electronic band gap and the flat-band potential of the PGC2 and PGC4 composites. The plot in Figure 11a shows the flat-band potential (EFB) of the semiconductor, which characterizes the semiconductor-electrolyte junction. The EFB values of PGC2 and PGC4 are −1.58 V (Figure 11a) and −1.23 V (Figure 11b), respectively, and the charge-carrier transfer at the interface is shown in the M-S plot.
Energy Level Diagram of PGC Composite Devices
The working mechanism of the PGC composite can be understood with the help of an energy band diagram. Figure 12 shows a schematic diagram of the energy band mechanism for the electrons and holes during UV illumination of the device. Under UV illumination of the PDs through the ITO electrode, the photons penetrate the PPy-GQDs (active) layer and create excitons. The photogenerated excitons diffuse into both the polymer and the quantum dots of the composite, while the electrons move to the Ag layer, resulting in the generation of a photocurrent [39,40]. The energy levels are EHOMO = 6.13 eV and ELUMO = 2.69 eV, and the electrochemical bandgap is 3.44 eV. Similarly, the electrochemical bandgaps (Eg(el)) obtained for PGC2 and PGC4 are shown in Table 2. Bredas et al. [41] reported the relations between the onsets of oxidation and reduction and the ionization potential and electron affinity values:

[Eonset]ox = IP − 4.4, (6)
[Eonset]red = EA − 4.4, (7)
Eg = IP − EA. (8)

The polymer attains a high electrical conductivity, a low ionization potential, and a larger electron affinity. Graphene quantum dots have strong π-conjugated bonds showing large orbital interaction with the pyrrole monomer, and the π-conjugated system elevates the primary HOMO to a higher energy orbit [42]. The bandgap of PGC4 is smaller than that of PGC2, and the device exhibits good optical properties and the great potential of the composite, demonstrating a high-performance, low-cost UV PD device. We remark that no degradation with respect to the photoresponsivity measurements was observed in PPy-GQDs composite samples stored in a vacuum-sealed desiccator at room temperature for 3 years. Table 3 demonstrates the improved photoresponse of PGC2 and PGC4 compared to other results, due to the enhanced interconnection of the GQDs by the island-like polymer matrices, which facilitates carrier transport within the polymer. The photocurrent switching phenomenon in GQD and PPy-GQDs devices may open up novel applications in optoelectronics.
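A small sketch of the Bredas-type bookkeeping in (6)-(8) is given below; the onset potentials are hypothetical values chosen only to reproduce the quoted levels (EHOMO = 6.13 eV, ELUMO = 2.69 eV, Eg(el) = 3.44 eV):

```python
def levels_from_onsets(e_onset_ox_V, e_onset_red_V):
    """Ionization potential (HOMO), electron affinity (LUMO) and the
    electrochemical gap from CV onsets, per Equations (6)-(8):
    IP = E_onset_ox + 4.4, EA = E_onset_red + 4.4, Eg = IP - EA (eV)."""
    ip = e_onset_ox_V + 4.4
    ea = e_onset_red_V + 4.4
    return ip, ea, ip - ea

# Hypothetical onsets reproducing the quoted PPy-like levels:
print(levels_from_onsets(1.73, -1.71))  # (6.13, 2.69, 3.44)
```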
Conclusions
In summary, highly luminescent GQDs were synthesized by the hydrothermal method, and a chemical oxidation polymerization technique was used to synthesize the PPy-GQDs composites. The FT-IR results indicated the chemical interaction, and the crystal fringes in the TEM images confirmed the homogeneous dispersion of the GQDs on the polypyrrole surface. The PL confirmed that the redshift of the composite is ascribed to the π-conjugation interaction between PPy and GQDs. The novel photodetector exhibits a high responsivity of 2.33 µA/W for the PGC4 composite under 355 nm UV light. The improved responsivity compared to PGC2 (1.93 µA/W) results from the reduction of the carrier transport barrier, giving rise to excellent stability and reproducibility, a fast response speed, and a highly durable device, opening great opportunities for using PPy-GQDs composites in high-performance, low-cost UV photodetectors. Further work will aim to establish a stable monolayer of the active material in the hybrid photodetector device for different wavelengths. | 8,391.6 | 2022-09-01T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |
Full-Diversity QO-STBC Technique for Large-Antenna MIMO Systems †
The need to achieve high data rates in modern telecommunication systems, such as the 5G standard, motivates the study and development of large-antenna and multiple-input multiple-output (MIMO) systems. This study introduces a large antenna-order design of a MIMO quasi-orthogonal space-time block code (QO-STBC) system that achieves better signal-to-noise ratio (SNR) and bit-error ratio (BER) performance than the conventional QO-STBCs, with the potential for massive MIMO (mMIMO) configurations. Although some earlier MIMO standards were built on orthogonal space-time block codes (O-STBCs), which are limited to two transmit antennas and in data rate, the need for higher data rates motivates the exploration of higher antenna configurations using different QO-STBC schemes. The standard QO-STBC offers a higher number of antennas than the O-STBC with the full spatial rate. Unfortunately, the standard QO-STBCs are not able to achieve full diversity due to self-interference within their detection matrices; this diminishes the BER performance of the QO-STBC scheme. The detection also involves nonlinear processing, which further complicates the system. To solve these problems, we propose a linear-processing design technique (which eliminates this system complexity) for constructing interference-free QO-STBCs that also achieve full diversity using Hadamard modal matrices, with the potential for mMIMO design. Since the modal matrices that orthogonalize the QO-STBC are not sparse, our proposal also supports O-STBCs with a well-behaved peak-to-average power ratio (PAPR) and better BER. The proposed QO-STBC outperforms other full-diversity techniques, including the Givens-rotation and eigenvalue decomposition (EVD) techniques, by 15 dB for both MIMO and multiple-input single-output (MISO) antenna configurations at a BER of 10−3. The proposed interference-free QO-STBC is also implemented for 16 × NR and 32 × NR MIMO systems, where NR ≤ 2. We demonstrate 8-, 16- and 32-transmit-antenna-enabled MIMO systems with the potential for mMIMO design applications, with attractive BER and PAPR performance characteristics.
Introduction
The need for higher data rates at the user end is the major motivation for new multiple-input multiple-output (MIMO) schemes in modern communication systems. These modern techniques, dispensing with a large number of antennas, also enable spectral efficiency and increased transmit-energy efficiency, although all antennas do not contribute equally [1][2][3][4]. This is laudable in the study of massive MIMO (mMIMO) systems, which are being pursued by researchers and industrialists alike for coping with the growing demand for higher data rates in modern telecommunication services. In the 5G standard, for example, the mmWave bands have been selected due to the abundance of unused spectrum resources [5]. However, while the high data rate problem can be overcome easily by deploying large bandwidths, the scarcity of the electromagnetic spectrum subtends some efficiency limitations in using large bandwidths to satisfy the high data rate demand. One of the ways of realizing such data rates (on which mMIMO can rely), for example in wireless communication systems, is by enabling higher antenna configurations or by optimizing the available/known configuration techniques. In this study, we explore methods of both optimizing the present MIMO design methods and exploring higher-order antenna configurations with potential for mMIMO.
Space-time block coding (STBC) [6,7], for example, is a MIMO technique that exploits the time and antenna dimensions to achieve high data rates with minimum error probability. In [8], it was shown that under similar spectral efficiencies, STBCs outperform spatial modulation in terms of bit error ratio (BER) metrics. STBCs can be combined with beamforming to minimize the error probability of MIMO systems [9][10][11] and are presently studied for systems supporting mMIMO schemes [3]. Other methods include the use of large antennas at the transmitting base stations [3].
Although STBCs combined with beamforming are good hybrids when minimum BER is desired, the conventional orthogonal STBC (O-STBC) [6] is limited to only two transmit antennas (NT = 2), as higher-order antenna configurations do not achieve orthogonality [12]. These limitations are overcome by specially combining the O-STBCs to increase the spatial diversity capability of the scheme [13]. Such codes are referred to as QO-STBCs [11,14]. The standard QO-STBC scheme provides NT > 2 under similar spectral conditions as the O-STBC with better performance and also dispenses with the full spatial rate, but not full diversity. Unfortunately, the QO-STBC also complicates the receiver design due to the lack of orthogonality among the codes. Such a limitation also leads to inter-symbol interference (ISI) in the decoding matrix of the QO-STBC receiver and diminishes the BER performance.
In terms of the detection matrix, these off-diagonal (ISI) terms are also described as self-interference terms [15]. Usually, it is difficult to decouple the transmitted symbols using linear processing at the receiver of a standard QO-STBC system. Consequently, several solutions have been offered by researchers to eliminate the ISI, namely using Givens rotations [16], eigenvalue decomposition (EVD) [17,18] and Hadamard matrices [1,17]. Although both the Givens-rotation technique and the EVD approach yield similar results [19], the EVD method is less complex to implement. The Hadamard matrices are equivalent modal matrices of the EVD with non-zero entries that enhance the full-diversity realization of ISI-free QO-STBCs. In [17], the authors proposed a QO-STBC code structure with no off-diagonal terms in its detection matrix. Unfortunately, however, the resulting ISI-free matrix is complex, and it will be demonstrated later in this study to have a poor BER performance (compared to the Givens-rotation and EVD methods). This is due to the degradation of the true gain by the power of the ISI terms (removed from the remaining off-diagonal positions), which are greater than the ISIs of the Givens-rotation and EVD methods.
This work was first introduced in [1] for multiple-input single-output (MISO) systems; here we extend the results to include large antennas (NT > 4), MIMO receivers with up to NR = 2 receiving antennas, and spectrally efficient modulation schemes (e.g., 16 quadrature amplitude modulation (QAM) and 128 QAM). Large antenna systems provide three advantages, namely: the effect of small-scale fading is averaged out; the random channels between NT and NR become pairwise orthogonal as the number of elements grows; and, lastly, they allow for transmit power efficiency in massive MIMO [20]. We apply modal matrices, obtained from the eigenvalues of the QO-STBCs and provided by the Hadamard matrices, to orthogonalize the detection matrix and enable linear processing. This is achieved by first deriving an equivalent virtual channel matrix (EVCM), which can be used to reduce the complexity of decoupling the space-time transmitted messages at the receiver. With the EVCM approach, the design and study of QO-STBC become attractive, since only the estimates of the originally NT-transmitted messages exist at the receiver. Using the EVCM approach, the receiver complexity is thus transferred to the transmitters, such as base stations, which have the flexibility of supporting very-large/mMIMO antennas (as in [21]) and also handle complex algorithms better than receivers [14] such as mobile phones. This is attractive for massive MIMO, as linear processing does not require the complex detection process required in dirty paper coding [22]. In mMIMO, the capacities for large NT can be verified using a left-truncated Gaussian distribution [23]. Furthermore, given that the conventional STBC has found applications in multi-directional MIMO designs [9], the proposed QO-STBC can also impact the mMIMO multi-directional QO-STBCs being explored in [24,25]. Our results can, in future studies, enhance the performance of large-antenna wireless sensor network (WSN) designs [20] in mMIMO systems, since the total power consumption decays as 1/NT as NT becomes very large [26], satisfying the power efficiency criteria of large antennas [20]. In addition, the linear processing of our proposed technique will be useful for low-complexity implementations at decision fusion centres (DFCs) over inhomogeneous large-scale fading between the sensors and the DFC, as in [27], although massive MIMO trades antennas at the DFCs for energy efficiency at the sensors of WSNs [28].
QO-STBCs with non-sparse matrices enable a well-performing peak-to-average power ratio (PAPR) [29,30]. Thus, since the modal matrices of our system do not have zero entries, we present, among other properties, a QO-STBC design scheme with well-performing PAPR. In addition, our system exhibits full diversity, an increased SNR performance that minimizes the BER, and support for linear decoding. The standard QO-STBC is combined with the modal matrices of the Hadamard matrices, motivated by EVD, to construct a new QO-STBC with no ISI that achieves full diversity. Furthermore, it has been shown in [8] that the true gain is significantly reduced by the eliminated ISI terms for NR > 3 receiving antennas, and also that realistic receivers may not support more than NR = 2 without severe mutual-coupling degradation.
In Section 2, the system model is described for specific QO-STBC characteristics. An introduction to full-diversity QO-STBC, including the proposed full-diversity QO-STBC, is presented in Section 3. We present the pairwise error probability in Section 4 and our simulation results in Section 5, with the conclusions following in Section 6.
System Model
Given a standard STBC code with a full rate (Rs = 1) (e.g., [6]), the ratio of space (number of antennas) to time (number of timeslots) can be expressed as Rs = NT/T = 1. Then, for an orthogonal STBC (O-STBC) system (e.g., [6]) with two transmit antennas (NT = 2) and one receiver (NR = 1), the received signal at the receiver can be represented as:

x = [x1, x2*]T = [h1 h2; h2* −h1*] [s1, s2]T + [z1, z2*]T = Hs + z, (1)

where z = [z1, z2*]T represents the additive white Gaussian noise (AWGN), and h1 and h2 represent the channel coefficients from Rayleigh fading, with NR = 1 in the above example. Note that [·]T represents the transpose of [·], and (·)* represents the complex conjugate.
Although the STBC code described in [6] achieves the full-rate criterion and full diversity, its major disadvantage is that the design does not support NT > 2. This problem can be solved by deploying QO-STBC, which can be formed from the STBCs. The QO-STBC can dispense with NT > 2 and complex entries. It achieves the full spatial rate [12,13,31], but it does not attain full diversity; QO-STBCs exhibit full spatial rate (Rs = 1) when, for example, NT = T. Meanwhile, consider a QO-STBC code with NT = T = 4 as follows [18,31]:

S = [S12 S34; S34 S12], (2)

where S12 = [s1 s2; −s2* s1*] and S34 = [s3 s4; −s4* s3*] follow the standard Alamouti STBC of [6].
Unfortunately, (2) does not satisfy the orthogonality condition of Definition 2 below (i.e., S^H S is not a scaled identity). This property has also motivated the proposal for the QO-STBC design discussed in [32].
The QO-STBC signal, S, can carry a phase-shift keying (PSK) or quadrature amplitude modulation (QAM) modulated signal, b ∈ C^(1×N), of length N. Unlike the case of NT = 2, where there are {hi}, i = 1, ..., 2, the QO-STBC (e.g., (2)) involves NT > 2 antenna spaces. Assuming that there are {hi}, i = 1, ..., 4, antenna spaces over which the QO-STBC symbols of (2) can be transmitted at different timeslots with one receiver (NR = 1), then combining the QO-STBC of (2) with the channel h = [h1 h2 h3 h4]T, the receiver obtains:

[x1; x2; x3; x4] = [s1 s2 s3 s4; −s2* s1* −s4* s3*; s3 s4 s1 s2; −s4* s3* −s2* s1*] [h1; h2; h3; h4] + [z1; z2; z3; z4]. (3)

The result in (3) follows from combining (2) and the channel vector h = [h1 h2 h3 h4]T, so that the received symbols can be expressed as:

x = Sh + z, (4)

where z ∈ C^(NT×1). The design in (3) complicates the receiver, since the received signals cannot be linearly processed without difficulty. For instance, it is difficult to decouple the transmitted messages at the receiver using linear processing. Thus, an EVCM is derived to enable linear processing, simplifying the decoding of only s = {si}, i = 1, ..., NT, and also the decoupling of the received symbols into the estimates of s (namely ŝ). As an example, computing the conjugates of the second and fourth rows of (3) and rearranging the results,

x̃ = [x1, x2*, x3, x4*]T = Hv s + z̃. (5)

A major advantage of the (5) architecture is that, if Hv is unknown to an eavesdropper, it is therefore impossible to compromise s over a time-varying condition, hence making the scheme secure. The result realized in (5) enables the channel H given in (1) to be expressed as:

Hv = [h1 h2 h3 h4; h2* −h1* h4* −h3*; h3 h4 h1 h2; h4* −h3* h2* −h1*], (6)

where (6) represents the EVCM, Hv. In the literature, an EVCM can be described as a matrix with ones on its leading diagonal and at least N²/2 zeros at its off-diagonal positions, with its remaining (self-interference) entries being bounded in magnitude by 1 [33]. Representatively,

H̄v = IN + D, (7)

where D is a sparse matrix. To reduce the system complexity, we apply the EVCM, which simplifies decoding at the receiver. There exists an optimum detector of a maximal ratio combining (MRC) output, namely using zero-forcing (ZF). For instance, let the received signal estimate be:

ŝ = Hv^H x̃ = (Hv^H Hv) s + Hv^H z̃ = D4 s + ẑ, (8)

where (·)^H is the conjugate transpose of (·). It can be verified that D4 is the detection matrix that implements a QO-STBC system with NT = 4 and NR = 1. In relation to (7), we define:

D4 = Hv^H Hv = [α 0 β 0; 0 α 0 β; β 0 α 0; 0 β 0 α], (9)

where α = Σ(i=1..4) |hi|². The mutual interference terms outside the leading diagonal can be expressed as β = 2Re(h1h3* + h2h4*). Of course, the interference term diminishes the performance of this style of QO-STBC, for example the signal-to-noise ratio (SNR) and consequently the BER. Our interest is to minimize the impact of β so that the SNR can be maximized and the BER minimized. An example is in constructing a suitable channel matrix whose decoding matrix is devoid of the ISI of (9).
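A minimal numerical sketch of the EVCM and its detection matrix, consistent with the ABBA-type code reconstructed in (2) above (the random channel draw is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
h1, h2, h3, h4 = h

# EVCM (6): rows 2 and 4 of the received block are conjugated first.
Hv = np.array([
    [h1,               h2,               h3,               h4],
    [np.conj(h2), -np.conj(h1),  np.conj(h4), -np.conj(h3)],
    [h3,               h4,               h1,               h2],
    [np.conj(h4), -np.conj(h3),  np.conj(h2), -np.conj(h1)],
])

D4 = Hv.conj().T @ Hv                    # detection matrix (9)
alpha = np.sum(np.abs(h) ** 2)           # leading-diagonal channel gain
beta = 2 * np.real(h1 * np.conj(h3) + h2 * np.conj(h4))  # self-interference
P = np.eye(4)[[2, 3, 0, 1]]              # couples (s1, s3) and (s2, s4)
print(np.allclose(D4, alpha * np.eye(4) + beta * P))  # True
```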
Full-Diversity QO-STBC Using EVD and the Proposed Modal Matrices
In [34], a zero-forcing detection was discussed for the QO-STBC design; this is similar to the eigenvalue method proposed in [18]. Since the matrices that orthogonalize the detection of symbols are non-singular, the received noise estimate is non-Gaussian. Moreover, the pre-whitening of the noise further amplifies it, so that the BER statistics are degraded. In [32], the authors explored the analytical derivation of a closed-form expression for the pairwise error probability (PEP). The models described in [32,34] sacrifice data rate and would require switching off either the first two or the last two antennas at the RF chain during each timeslot; this can be expensive.
Meanwhile, QO-STBCs that exhibit no ISI in their detection matrices are said to achieve full diversity. For example, ISI-free QO-STBC has been achieved through the rotation of one-half of the symbol constellation set [35-37], multidimensional rotation [38-40], Givens rotations [16], EVD [17,18] and Hadamard matrices [1,17]. Although the EVD approach is less complex and will be followed here, the results can be enhanced if an equivalent modal matrix can be derived without zero terms.
Definition 1 [1]. If A = [a_{i,j}] is a square matrix and x_i is a column vector, let A x_i = v_i x_i, where v_i is a scalar; then v_i is an eigenvalue and x_i an eigenvector of A. The eigenvectors x_i can be assembled column-wise into a square matrix usually called a modal matrix, M. If the eigenvalues of A are placed on the leading diagonal of a matrix V, then V = diag(v_i); both A and V share the same eigenvalues. It follows that AM = MV.
Here, we use Definition 1 to demonstrate our proposal using a handy N_T = 4 and show also that this can easily be extended to other, higher antenna configurations, namely N_T > 4. Substituting D for A in Definition 1, it can be observed that:

M^{-1} D M = V.  (10)

We formulate the modal matrices depending on the number of transmitting antennas to eliminate the interfering terms in the detection matrix; this results in different modal matrix sizes. By applying (10), namely M^{-1} D M = V, to (9), the QO-STBC scheme can attain full diversity; this is the principle of diagonalizing a matrix [41]. The matrix V therefore achieves the required interference-free detection. By (9), the resulting modal matrix of the QO-STBC system under study with N_T = 4 and T = 4 can be expressed as:

M_Hv =
[ 1  0  1  0
  0  1  0  1
  1  0 -1  0
  0  1  0 -1 ].  (11)

A new EVCM can be formed by post-multiplying H_v by M_Hv, such as:

H_new = H_v M_Hv.  (12)

Note that if the channel is defined as in (12), the linear model will be expressed as in (1). On the other hand, if the system has channel coefficients given by h = {h_i}, i = 1, ..., N_T, then the system can be described (in linear form) as in (4).

Definition 2 (see Theorem 5.5.1 of [12]). A T × n complex generalized linear processing orthogonal design O_c in the variables 0, ±c_1, ±c_1*, ±c_2, ±c_2*, ..., ±c_n, ±c_n* exists if and only if there exists a complex generalized linear processing orthogonal design G_c in the same variables and of the same size, such that G_c^H G_c = (|c_1|² + ... + |c_n|²) I. Notably, only the Alamouti STBC achieves this condition without other post- (or pre-) processing.

Now, rewrite (4) in the form x = Hs + z; then the receiver computes:

H^H x = H^H H s + H^H z.  (13)

From (13), the encoding matrix S of (2) simplifies to s = {s_i}, i = 1, ..., N_T, only. On the other hand, the term H^H H in (13) also permits linear decoding and eliminates the off-diagonal β interfering terms: H_new^H H_new provides a diagonal matrix proportional to

diag(α + β, α + β, α − β, α − β)  (14)

as the new detection matrix with no ISI. Furthermore, observe that the eliminated ISI impacts the true power gain. For a large number of antenna configurations, some antenna branches contribute more than others [2]. In (14), for example, the energy of the last two antenna branches is reduced by the eliminated off-diagonal ISI terms, so that the resulting gains are larger on the first two antenna branches. This can be useful with RF-chain switching and also when using directional communications to concentrate power, including the antenna selection technique.
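A small sketch of Definition 1 in action, using the α/β detection-matrix structure of (9) with illustrative numeric values: numpy's eigendecomposition returns a modal matrix that diagonalizes D, and the closed-form modal matrix of (11), zero entries included, does the same.

```python
import numpy as np

alpha, beta = 2.3, 0.7                    # illustrative gain and ISI values
J = np.zeros((4, 4))
J[0, 2] = J[2, 0] = J[1, 3] = J[3, 1] = 1.0
D = alpha * np.eye(4) + beta * J          # detection-matrix structure of (9)

# Definition 1: columns of M are eigenvectors, and M^{-1} D M = V.
evals, M = np.linalg.eig(D)
print(np.allclose(np.linalg.inv(M) @ D @ M, np.diag(evals)))  # True

# Closed-form modal matrix as in (11); note the zero entries.
M11 = np.array([[1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, -1, 0],
                [0, 1, 0, -1]], dtype=float)
V = np.linalg.inv(M11) @ D @ M11
print(np.allclose(V, np.diag([alpha + beta, alpha + beta,
                              alpha - beta, alpha - beta])))  # True
```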
For the 4 × 1 configuration, the channel is {h_i}, i = 1, ..., 4, while for the 3 × 1 configuration it is {h_i}, i = 1, ..., 3, but within the same QO-STBC design. The 3 × 1 configuration is achieved by setting h_4 = 0; for example, using the method of (12), it is possible to construct an EVCM suitable for N_T = 3 with N_R = 1 by substituting h_4 = 0 in H_new. On the other hand, formulating the equivalent symbol matrix involves eliminating the fourth column of the matrix [16], since only three antenna spaces are required. With a receiver employing maximum likelihood (ML) detection, the receiver finds the signals ŝ = {ŝ_i}, i = 1, ..., N_T, whose Euclidean distance is closest to the original transmitted QO-STBC signals, namely ŝ_1, ..., ŝ_{N_T}. In this case, the error matrix can be expressed as ∆ = s − ŝ. We assume that the channel is quasi-static for N_T consecutive timeslots.
Combined Standard QO-STBC and Hadamard Matrices for QO-STBC Design
Although one can easily verify that M_Hv^{-1} D M_Hv = V, the limitations of (11) include poor PAPR performance due to the sparsity of the EVD modal matrix [30] and poor BER resulting from the zero terms [8]. Since QO-STBC matrices can be diagonalized using modal matrices (M), Hadamard matrices can also be used to diagonalize QO-STBC systems. For an n × n matrix, Hadamard matrices have ±1 entries with the columns (and rows) being pairwise orthogonal [42,43], for example:

H_n H_n^T = n I_n,  (16)

where I_n is an identity matrix. Considering the system example under study, the Hadamard matrix of 4 × 4 order can be expressed as:

H_4 =
[ 1  1  1  1
  1 -1  1 -1
  1  1 -1 -1
  1 -1 -1  1 ],  (17)

which follows the Sylvester construction H_{2n} = [ H_n  H_n ; H_n  -H_n ]. It can be observed in (17) that there exist no zero (0) entries as there are in (11). Such zero entries limit the BER performance, as they null out the channel gains. From (16), it can be observed that the use of the Hadamard matrix as the modal matrix gives the advantage of an N_T multiple in the diagonalized matrix.
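The Sylvester recursion below is one standard way (not specific to this paper) to generate the ±1 Hadamard matrices used here; the check confirms H_n H_n^T = n I_n from (16).

```python
import numpy as np

def sylvester_hadamard(n: int) -> np.ndarray:
    """Hadamard matrix of order n (a power of 2) via H_{2k} = [[H, H], [H, -H]]."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H4 = sylvester_hadamard(4)
print(H4.astype(int))
# Pairwise-orthogonal rows/columns and, crucially, no zero entries:
print(np.allclose(H4 @ H4.T, 4 * np.eye(4)))  # True: H_n H_n^T = n I_n
```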
In Section 3, we discussed that modal matrices are applied to QO-STBC systems in order to eliminate the off-diagonal (ISI) terms. This phenomenon also led to the proposal of applying Hadamard matrices to ensure that QO-STBC systems attain full diversity by eliminating the off-diagonal terms. Since the 0's null out the channel gains, the modal matrix in (11) diminishes the SNR and consequently worsens the BER performance of QO-STBC systems; for instance, a channel gain is eliminated when combined with a zero. Second, the presence of these zeros leads to poorer PAPR performance (see [29,30] and the references therein). The modal matrices subtended by the Hadamard matrices do not have these limitations; consequently, QO-STBC codes constructed from them exhibit better BER and better PAPR. Meanwhile, our interest in this study is in minimizing the error probability (BER). Thus, we combine (6) and (17) so that the channel matrix can be expressed as:

H_new = H_v M_Hd,  (18)

where M_Hd = H_4. At the receiver, linear processing can be applied as follows:

H_new^H x = H_new^H H_new s + H_new^H z,  (19)

where H_new^H H_new is diagonal. The result in (18) can be discussed in terms of the advantages it provides. As an example, it eliminates the nonlinear decoding that existed in the standard QO-STBC. Additionally, comparing (19) with (14), using the proposed modal matrix technique improves the gain by N_T times the power gain; consequently, the received SNR is improved N_T-fold. With N_T = 3, the channel term h_4 is set to zero (0) [16,17]. As an example, we express the EVCM of (18) with h_4 = 0 as (20). If s = {s_i}, i = 1, ..., 4, were sent in (18), then only s = {s_i}, i = 1, ..., 3, are required in the case of N_T = 3. Thus, the fourth column of (20) is ignored, so that the EVCM for N_T = 3 becomes the corresponding 4 × 3 matrix of (21). This phenomenon (as in (21)) can be extended to designing QO-STBC systems with N_T = 5, 6, 7, 9, 10, 11, etc., for higher-order antenna configurations.
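The N_T-fold gain claimed for (19) can be verified numerically. The sketch again assumes the ABBA-type EVCM from the earlier example; under that assumption, H_new^H H_new comes out diagonal, ISI-free, with entries N_T(α ± β).

```python
import numpy as np

rng = np.random.default_rng(2)
h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
hc = np.conj(h)

Hv = np.array([[h[0],   h[1],   h[2],   h[3]],
               [hc[1], -hc[0],  hc[3], -hc[2]],
               [h[2],   h[3],   h[0],   h[1]],
               [hc[3], -hc[2],  hc[1], -hc[0]]])

H4 = np.array([[1, 1, 1, 1],
               [1, -1, 1, -1],
               [1, 1, -1, -1],
               [1, -1, -1, 1]], dtype=float)   # Hadamard modal matrix (17)

Hnew = Hv @ H4                                  # proposed channel matrix (18)
Dnew = Hnew.conj().T @ Hnew                     # detection matrix of (19)

alpha = (np.abs(h) ** 2).sum()
beta = 2 * np.real(h[0] * hc[2] + h[1] * hc[3])
target = 4 * np.diag([alpha + beta, alpha + beta, alpha - beta, alpha - beta])
print(np.allclose(Dnew, target))  # True: ISI-free, with N_T times the power gain
```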
In terms of complexity in comparison to the EVD method, the number of terms is exactly the same, except that when the standard QO-STBC matrix terms are multiplied by the null terms from the sparse EVD modal matrix, the channel gains are nulled out, so that the resulting EVCM is reduced in its number of terms; this is pronounced in the analysis results discussed in Section 3 of this paper (see (12)).
Theorem 1. The standard QO-STBCs can achieve full diversity if the detection matrix exhibits no off-diagonal terms and its modal matrix has non-zero entries.
In [30], it was shown that full-diversity Toeplitz STBC codes exhibit well-reduced PAPR if the codes have non-zero entries. Meanwhile, the PAPR can be calculated as:

PAPR = max_k |x_k|² / E{|x_k|²},  (22)

where x is the time-domain orthogonal frequency division multiplexing (OFDM) symbol vector of length K. Since the scheme involves multiple (N_T) transmit branches, the OFDM driver is performed along each of the transmit branches, and the PAPR is measured using the complementary cumulative distribution function (CCDF), namely CCDF = 1 − Pr{PAPR ≤ x_0} = Pr{PAPR > x_0}, where Pr{·} denotes probability and x_0 is the target symbol amplitude threshold. The indicative PAPR is therefore an average of the PAPRs over each transmitting branch.
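As an illustration of the PAPR/CCDF measurement just described, the following sketch evaluates one OFDM transmit branch; the subcarrier count, constellation and symbol count are our illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
K, n_sym = 256, 10_000                       # subcarriers, OFDM symbols

# Random QPSK subcarrier loading per OFDM symbol on one transmit branch.
bits = rng.integers(0, 4, size=(n_sym, K))
X = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

x = np.fft.ifft(X, axis=1)                   # time-domain OFDM symbols
papr = (np.abs(x) ** 2).max(axis=1) / (np.abs(x) ** 2).mean(axis=1)
papr_db = 10 * np.log10(papr)

# CCDF = Pr{PAPR > x0}: fraction of symbols whose PAPR exceeds the threshold.
for x0 in (6.0, 8.0, 10.0):
    print(f"CCDF({x0} dB) = {(papr_db > x0).mean():.4f}")
```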
Corollary 1. As a corollary of Theorem 1, it can be established that modal matrices with no zero entries yield better-PAPR-performing QO-STBCs.
Similar to the foregoing discussion, when the antenna configuration is increased to N_T = 8, the method of realizing (6) can be used. However, the process can be simplified by formulating two EVCMs from h = {h_i}, i = 1, ..., 8: the EVCM for antenna indices 5 to 8 is defined from h_5, ..., h_8 in the same pattern as (6), giving (23). Then, combining (23) and (6) in the regime of (2) and multiplying by the necessary modal matrix yields the N_T = 8 design of (24). Using the method that subtends (24), other higher antenna configurations (namely, N_T > 8) can be explored. For other base stations equipped with 4 < N_T < 8, the process that subtended (21) can be used.
Diagonalized Hadamard STBC
Other methods of constructing new codes from the standard QO-STBC have been reported [17,30]. The method described in [30] does not adopt the Hadamard matrix and does not achieve the full rate. However, [17] combined cyclic matrices with Hadamard matrices to form new codes; the cyclic matrix does not achieve orthogonality, hence its combination with the Hadamard matrix. In [17], the authors introduced a new QO-STBC design from cyclic matrices called the diagonalized Hadamard STBC (DHSTBC), in which the symbol matrix S_c is a cyclic (circulant) arrangement of the transmit symbols (25) [17]. Given the knowledge of the modal matrices proposed in this study, the equivalent symbol matrix can be discussed. As the modal matrix of D = H_v^H H_v obtained from M_Hv^{-1} D M_Hv = V was used to form the EVCM in (12), similarly, from M_s^{-1} D_s M_s = V, the equivalent symbol matrix can be discussed, knowing that M_s is the modal matrix of D_s. Considering (25), the equivalent symbol matrix can be derived as S_new = S_c × M_s; this is realized by combining the cyclic matrix of (25) and a Hadamard matrix to obtain the DHSTBC code defined in [17] (26). Recall the system model of (4). Similar to the (4) model, if the symbol matrix is defined from the cyclic matrix of (25), then the channel matrix can also be expressed in a corresponding equivalent form, and constructing an EVCM for linear decoding involves combining that EVCM with the Hadamard-based modal matrix of (17). Similar to (18), the receiver then obtains a linear model in s = {s_i}, i = 1, ..., N_T, but the resulting detection matrix (27) is dense: for N_T = 4, beyond the leading diagonal it contains additional cross terms a_i b_j. Furthermore, if H_4 is formed as an order-4 Hadamard matrix (n = 4) as the sequel to the Hadamard criteria, the resulting matrix is large and complex; the implications are enumerated shortly. For instance, since there are additional interfering terms in (27) after expanding a_i b_i for i = 1, ..., N_T, then, when compared with the results of the ISI-free QO-STBC in (37), the terms a_i b_i, i = 1, ..., N_T, further diminish the BER performance, so that the DHSTBC scheme performs poorly.
Comparing the proposed QO-STBC result (18) with the earlier Hadamard algorithm of the DHSTBC in (27), the proposed QO-STBC has well-reduced computational complexity. For instance, expanding a_i b_j for i = 1, ..., N_T, it can be observed that there are 16 terms involved in the earlier DHSTBC, while there are only eight terms involved in the proposed one; there exist β + O(2N_T) ISI terms. In terms of performance, the earlier Hadamard QO-STBC (DHSTBC) involves eight additional interfering terms (apart from β) that degrade its BER performance.
MIMO QO-STBC
In the earlier discussions, we have supposed that there is N_R = 1 receive antenna; here, we consider the case of N_R > 1. Thus, each of the channel terms from H = {h_i}, i = 1, ..., N_T, can be treated as a vector of the form h_i = [h_{i,1}, ..., h_{i,N_R}]^T, with one entry per receive branch (28). If the equivalent channel can be derived, then MRC with N_R receiving elements can be described. Assuming perfect channel state information (CSI) (i.e., the channel coefficients are perfectly available at the receiver), the detector attains the optimal maximum likelihood (ML) rule as [44]:

ŝ = arg min_s ||x − H_new s||²,  (29)

where ||x − H_new s||² is the Euclidean distance metric for ML decoding. If an equivalent channel is known (e.g., the EVCM), the maximal ratio combining (MRC) rule from [33,44] provides that the matched-filter outputs H_new,j^H x_j are summed over the receiver antenna branches j = 1, ..., N_R (30). In [1], we studied the QO-STBC scheme only for a multiple-input single-output (MISO) system, thus N_R = 1; in this version, we have extended the study to include N_R = 2. Considering the MIMO scheme in (29), both N_T and the gain H_new^H H_new influence the amplitude of the received signal; H_new^H H_new represents an identity matrix scaled (as in the case N_R = 1) by the channel gains, such as ||H||²_F = Σ|h_{i,j}|². The noise term, on the other hand, is amplified by H_new,j^H for j = 1, 2; this is because the EVCM is unitary up to this gain scaling (see (37)). The degree of impact of H_new,j^H on the noise term affects the Euclidean distance metric at the receiver; this depends on the fading of the channel. The complexity of decoupling the transmitted message in the receiver reduces to finding only ŝ = {ŝ_i}, i = 1, ..., N_T, for all of the receiving branches. STBCs that support linear transceiver systems incur a loss in capacity over channels with multiple receive antennas [45]. This is even more noticeable in the case of conventional QO-STBCs due to ISI, and worst when the DHSTBC is used to enable transmitter diversity, because the ISI terms (β) grow as N_R increases, in fact up to the point of no further diversity gain.
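A schematic MRC sketch for N_R = 2 (ours; generic random matrices stand in for the per-branch equivalent channels H_new,j): matched-filter outputs are summed across branches and jointly normalized before a symbol-wise decision.

```python
import numpy as np

rng = np.random.default_rng(4)
NT, NR = 4, 2

# Per-branch equivalent channels (illustrative stand-ins) and received vectors.
s = np.sign(rng.standard_normal(NT)) + 0j                  # BPSK symbols
H = [(rng.standard_normal((NT, NT)) + 1j * rng.standard_normal((NT, NT)))
     / np.sqrt(2) for _ in range(NR)]
x = [Hj @ s + 0.05 * (rng.standard_normal(NT) + 1j * rng.standard_normal(NT))
     for Hj in H]

# MRC: sum matched-filter outputs over branches, normalize by the combined
# Gram matrix, then make a symbol-wise decision.
num = sum(Hj.conj().T @ xj for Hj, xj in zip(H, x))
G = sum(Hj.conj().T @ Hj for Hj in H)
s_hat = np.sign(np.real(np.linalg.solve(G, num)))
print(np.array_equal(s_hat, np.real(s)))  # expected True at this noise level
```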
Pairwise Error Probability of the QO-STBCs
Usually, the channel is considered quasi-static throughout each symbol block, so that the Chernoff bound is averaged over a Rayleigh fading channel as [9]:

P(s → ŝ) = E_H{P(s → ŝ | H)},  (31)

where P(s → ŝ | H) is the pairwise error probability (PEP), which responds to the received SNR, and E_H{·} is the expectation over each symbol block. The conditional PEP, for a given channel H, is described using the well-discussed Chernoff bound of the form:

P(s → ŝ | H) = Q( sqrt( ||H(s − ŝ)||² / (2N_0) ) ) ≤ exp( −||H(s − ŝ)||² / (4N_0) ),  (32)

where N_0 is from the circularly-symmetric additive white Gaussian noise with zero mean and variance σ_Z² = N_0/2; this is the case when E_s = 1. Indeed, the Gaussian Q-function is related to the complementary error function as:

Q(x) = (1/√(2π)) ∫_x^∞ exp(−t²/2) dt = (1/2) erfc(x/√2).

In terms of (32), the conditional PEP is summarized as:

P(s → ŝ | H) ≤ exp(−γ_x),  (33)

where γ_x = ||H(s − ŝ)||² / (4N_0) is the SNR at the maximal ratio combining (MRC) receiver output. The performance bound then follows as a function of g_QAM and λ_{N_T} [9] (34), where g_QAM = 3/(2(M − 1)) and λ_{N_T} is from the detection matrix. Meanwhile, from the Cauchy-Schwarz inequality, ||H s_∆||² ≤ ||H||² ||s_∆||². Furthermore, define B as an m × n matrix; then its Frobenius norm is ||B||_F = sqrt(Σ_i Σ_j |b_{i,j}|²). Rewrite H(s − ŝ) as H s_∆, such that ||H s_∆||² ≤ ||H||²_2 ||s_∆||², where ||H||²_2 = ||H||²_F [46]. If s_∆ = (s − ŝ) estimates the error detection metrics, then ||H||²_F = tr(H^H H). The likelihood of erroneously decoding the transmitted signals can be used to discuss the diversity product of the scheme [47]. However, for any ISI-free QO-STBC, H H^H = σ_H² I_{N_T}, where N_T > 2 and σ_H² is the power gain.
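A quick numeric comparison (ours) of the exact Q-function against the Chernoff bound used in (32)-(33), showing how loose the bound is at moderate arguments.

```python
import numpy as np
from scipy.special import erfc

def q_func(x):
    """Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / np.sqrt(2))

def chernoff(x):
    """Chernoff bound on the Q-function: Q(x) <= exp(-x^2 / 2)."""
    return np.exp(-x ** 2 / 2)

for x in (1.0, 2.0, 3.0, 4.0):
    print(f"x={x}: Q={q_func(x):.3e}  Chernoff={chernoff(x):.3e}")
```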
The SNR Performance of EVD and DHSTBC
The conventional O-STBC achieves full diversity and exists only for N_T = 2 (for complex constellations). For N_T = 4, one can express the SNR at the receiver of the ISI-free QO-STBC (14) from EVD as proportional to (E_s/N_0)·λ_{N_T} (35), where λ_{N_T} are the diagonal gains of the detection matrix. For an ISI-free QO-STBC, although the results in (35) and (27) are similar, the impacts of the channel matrix are different. When the detection matrix is a diagonal matrix, for instance, rank(H^H H) = 2 when N_T = 2, while rank(H^H H) = 4 when N_T = 4, and so on; the Euclidean distance metrics also differ, both for different QAM constellations and for different N_T. Now, the probability of erroneous detection (ŝ ≠ s) can be expressed as:

P(ŝ ≠ s) = Q( sqrt( d_F² / (2N_0) ) ),  (36)

with d_F being the Euclidean distance metric at the receiver. Sometimes, the Chernoff bound of the Gaussian function can be used to approximate the Q-function, as in [8]. The method of DHSTBC does not perform any better; for example, the ISI is greater in the DHSTBC (see (27)) than when using either the EVD (14) or the proposed technique (19). The effects of the ISI in diminishing the true power gain of the DHSTBC are evident in the BER results discussed in Section 5.
Although one can easily verify that H_v^H H_v = D, the limitations of (11) include poor PAPR performance due to the sparsity of the modal matrix [30] and poor BER resulting from the zero terms of M_Hv [1], because the SNR and the BER performance depend on the power gain contributed by H_v. The proposed modal matrix is M_Hd and the proposed channel matrix is H_new. Thus, the SNR at the receiver can be described as:

SNR ∝ (E_s/N_0)·N_T·λ_{N_T}.  (37)

Equation (37) provides the SNR statistics at the MRC output of the receiver and provides information on the BER performance of the ISI-free QO-STBC built from the Hadamard modal matrix. Notice the extra factor N_T impacting the power gain, which further improves the BER statistics; similarly, one can verify that M_Hd^{-1} D M_Hd = V. Consequently, the SNR can be well described through the detection matrix: since H_new^H H_new provides

N_T·diag(α + β, α + β, α − β, α − β),  (38)

then (38) can be rewritten accordingly (39), where M_Hd is a 4 × 4 Hadamard matrix when N_T = 4. Comparing (38) with the EVD case of (14), it is clear that the power gain in using M_Hd is N_T times greater than in using M_Hv. The use of M_Hd thus affects the slope of the BER, so that the full-diversity method of the proposed QO-STBC becomes better. In general, the method of constructing N_T = 4 antenna configurations described in Section 3.1 can be extended to any higher-order design, namely N_T = 8, 16, 32, etc.
Remark 1. We refer the reader to our earlier discussions in [1,48] for other designs that do not enable the full rate but maintain full diversity.
Simulation Results and Discussion
In [1], we studied only the MISO cases using QPSK and N_T = 4; here (in this study), we extend the MISO configuration to include N_R > 1. For fair comparisons, the simulation environments are similar except for the use of suitable EVCM configurations for the different numbers of antennas and code design styles. The symbols we have used are uncoded; in other words, no forward error correction is applied. At the receiver, the optimum detector is assumed, so that an MRC combining method is adopted. We do not present the simulation results for N_T = 4 and N_T = 3 in this work, as these have been addressed in [1]. Meanwhile, the Rayleigh fading channel model is used, which is considered quasi-static over each symbol block; the model has zero mean and unit variance.
MISO and MIMO QO-STBC Design Using Eight Transmit Antennas
This study implements the standard QO-STBC code system described in Section 2 at the transmitter and ML detection operating on the MRC output at the receiver to construct 8 × N_R, 16 × N_R and 32 × N_R MIMO systems using 16 and 128 QAM, with N_R ≤ 2; these are simulated in the MATLAB environment. In the process, random symbols are generated; this involves 7.5 × 10^4 symbols averaged over each channel block. These are mapped using the aforementioned mapping schemes, demultiplexed and processed over the EVCM channels that enable the N_T × N_R configurations when N_T > 4 and N_R ≤ 2 transmit and receive antennas are used, respectively.
Using the EVCM simplifies detection to linear processing, so that the estimates of the transmitted symbols, [ŝ_1, ..., ŝ_{N_T}]^T, are easily decoupled. Since there are N_T > 4 transmitting branches, each branch receives N_T messages, up to a total of N_R N_T (where N_R = 1 for the MISO design) receptions. The receiver finds the estimates {ŝ_i} whose Euclidean distance, |x − Hs|², is closest to the transmitted messages; afterwards, M-QAM signal demodulation is performed. The transmitted message (s) and the received message (ŝ) are then compared for the error value as ∆ = s − ŝ; the BER is computed, and the results are shown in the following Figures 1 to 6.
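For orientation, a scaled-down BER simulation in the same spirit (ours): it uses the Alamouti 2 × 1 case with QPSK rather than the paper's N_T ≥ 8 QO-STBC systems, but follows the same pipeline of random symbols, quasi-static Rayleigh fading, MRC combining and nearest-symbol decisions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_blocks, snr_db = 50_000, 10.0
n0 = 10 ** (-snr_db / 10)

# QPSK symbols, two per Alamouti block.
bits = rng.integers(0, 4, size=(n_blocks, 2))
s = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))

h = (rng.standard_normal((n_blocks, 2)) + 1j * rng.standard_normal((n_blocks, 2))) / np.sqrt(2)
z = np.sqrt(n0 / 2) * (rng.standard_normal((n_blocks, 2)) + 1j * rng.standard_normal((n_blocks, 2)))

# Received samples over two timeslots, then MRC combining via the EVCM.
x1 = h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1] + z[:, 0]
x2 = -h[:, 0] * np.conj(s[:, 1]) + h[:, 1] * np.conj(s[:, 0]) + z[:, 1]
g = (np.abs(h) ** 2).sum(axis=1)
y1 = (np.conj(h[:, 0]) * x1 + h[:, 1] * np.conj(x2)) / g
y2 = (np.conj(h[:, 1]) * x1 - h[:, 0] * np.conj(x2)) / g

# Nearest-QPSK-symbol decision, then the symbol error rate.
dec = lambda y: np.round((np.angle(y) - np.pi / 4) / (np.pi / 2)) % 4
errs = (dec(y1) != bits[:, 0]).mean() + (dec(y2) != bits[:, 1]).mean()
print(f"SER at {snr_db} dB: {errs / 2:.4e}")
```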
In Figure 1, the proposed QO-STBC outperforms the standard and eigenvalue QO-STBC styles. Specifically, at a BER of 10^−4, the proposed scheme outperforms the eigenvalue QO-STBC by 10 dB and the standard QO-STBC by 5 dB. For the MIMO design, namely 8 × 2, the proposed technique outperforms the standard QO-STBC by 6 dB and the eigenvalue QO-STBC technique by 9 dB. The degradation stems from the eliminated off-diagonal terms, which diminish the true power of the received signal. We extend our investigation to 128 QAM, as shown in Figure 2; it is found that the proposed scheme also outperforms both the eigenvalue technique and the standard QO-STBC.
From (18), the gain H_new^H H_new and the factor N_T impact the amplitude of the received signal, while only H_new^H impacts the noise. The factor N_T amplifies the amplitude of the received signal such that the power gain is improved (see (18)) compared with the eigenvalue interference-free QO-STBC in Figure 2. Furthermore, in Figure 2, this proposed QO-STBC technique translates to a 6-dB gain in comparison to the earlier eigenvalue-based QO-STBC scheme. Significantly, two parts are involved (σ_h² and β); β is an interference term that degrades the true gain σ_h². Any method that can eliminate β will further improve the BER performance.
MISO and MIMO QO-STBC Design Using 16 Transmit Antennas
From (18), it is found that the result of the proposed QO-STBC (in Figure 3) satisfies the Hadamard criteria in (18). For the N_T = 16, N_R = 1, 2 QO-STBC scheme, the proposed method outperformed the eigenvalue QO-STBC. Clearly, the N_T-times amplitude gain of the Hadamard criteria in (18) is reflected also in Figure 3, as the proposed QO-STBC consistently outperformed both the standard and eigenvalue-based QO-STBCs by about 10 dB and 13 dB, respectively, at a BER of 10^−3. In all cases, the proposed method outperformed all other QO-STBCs. Although the symbols transmitted over the antenna spaces are typically unique, the EVCMs are constructed with respect to Section 2 of this study. At the receiver, AWGN terms, Z, are constructed and added to each receiver antenna branch. Since there are N_T = 16 transmitting branches, each branch receives N_T messages, up to a total of N_R N_T (where N_R ≤ 2 for the MIMO design) receptions. Again, using the EVCM simplifies the detection of the transmitted symbols to linear processing, so that the estimates of the transmitted symbols, [ŝ_1, ..., ŝ_{N_T}]^T, are easily decoupled. The receiver finds the estimates of {s_i} whose Euclidean distance, |x − Hs|², is closest to the transmitted messages; then, 128 QAM signal demodulation is performed in Figure 4. The proposed method clearly outperformed both the standard and eigenvalue approaches. The two latter techniques show degraded BER measures due to some irreducible errors from the "untrue gain" and the noise-power enhancement. In (27), linear detection was performed; for the QO-STBC discussed in [17], it was shown that the detection matrix (27) is huge and complex and contains further degrading elements that limit the improvement from the true gain (σ_h²); on the other hand, the QO-STBC method of [48] provided a matrix that precludes these limitations.
The investigation is further extended to another modulation scheme, 16 QAM; the results are shown in Figure 5.
MISO and MIMO QO-STBC Design Using 32 Transmit Antennas
Finally, we report in Figures 5 and 6 the results for N_T = 16, 32 with N_R = 1, 2 using 16 and 128 QAM, respectively. In Figure 5, the proposed QO-STBC for the 16 × 1 antenna design at a BER of 10^−4 outperformed the eigenvalue QO-STBC by 15 dB and the standard QO-STBC by 8 dB. Considering the design also for the 16 × 2 antenna configuration at a BER of 10^−4, the proposed QO-STBC outperformed the eigenvalue-based QO-STBC by 15 dB.
Similarly, the proposed Hadamard-based QO-STBC performs better than the standard QO-STBC technique by 10 dB. From the proposed QO-STBC design coupled with the MRC rule at the receiver, it follows that the MIMO design method using MRC improves the QO-STBC system design for independently fading channels, showing increasing power gain with an increasing number of receivers. By increasing the transmitter diversity and using a higher-order, spectrally efficient modulation scheme as in Figure 6, the results for 128 QAM also corroborate the foregoing performance gains achieved by the proposed QO-STBC over other similarly configured QO-STBC techniques. For example, at a BER of 10^−4 for N_T = 32 with N_R = 1, the proposed QO-STBC design achieves 15 dB better performance than the eigenvalue-based QO-STBC. Similarly, when the receiver diversity is increased from N_R = 1 to N_R = 2, it can be seen that the proposed scheme achieves 15 dB better performance than the eigenvalue-based QO-STBC and 12 dB better than the standard QO-STBC. In general, the off-diagonal interfering terms further reduce the performance of the eigenvalue-based QO-STBC design.
Note that, due to the amplitude modulation in QAM modulators, a normalization of the received symbol amplitudes must be performed before demodulation to realize these results. For PSK symbols, there are no bias-energy terms; thus, PSK can be more tolerant than QAM.
PAPR Evaluation of Different QO-STBC Schemes
In this section, we evaluate the PAPR performance of the different QO-STBC schemes in our foregoing discussion. We show in Figure 7 the PAPR metrics of the three QO-STBCs under study using N_T = 8 and 128 QAM modulation. From the results, the Hadamard technique offers better PAPR than both the EVD and standard techniques in all cases. Moreover, while the Hadamard technique provides a better PAPR than the EVD technique, the EVD QO-STBC is also 1 dB better than the standard QO-STBC scheme, which is itself slightly better than the conventional OFDM system. Meanwhile, the performance of the Hadamard-based (and, similarly, the EVD) QO-STBC system can be improved by adopting any of the well-known PAPR reduction techniques. Such techniques must also support the aim of this work, which is geared towards reducing the complexity of the receiver, as the receiver modules are generally small in nature; this will eliminate unnecessary depletion of the limited battery power of such devices. Examples of such lightweight PAPR reduction techniques include companding and iterative clipping and filtering.
Conclusions
In this study, we have proposed and evaluated a simple technique for using eigenvalues, or the associated modal matrix, to improve QO-STBC system performance so that it can achieve full diversity. Similar matrices in earlier methods are limited in performance by the null terms of the modal matrix, which further impoverish the RF chain in terms of PAPR. We suggested and proved that, by using Hadamard matrices as the modal matrices, the QO-STBC can achieve linear processing, thereby reducing the system complexity, since the detection matrices are diagonal with no off-diagonal ISI terms. Two newly proposed methods of constructing QO-STBC codes for maximal diversity gain were explored for up to N_T = 8, 16 and 32 antenna configurations enabling MIMO design. While the QO-STBC was used to enable multiple antennas at the transmitter, we introduced MRC at the receiver, which combines the gains from all branches to maximize diversity gain. The DHSTBC code provides a method of designing the QO-STBC system, but its detection matrix yields poorer performance due to some extra interfering terms, β + O(2N_T), in the detection matrix. These extra degrading detection terms are absent in the proposed QO-STBC scheme, leading to better performance in terms of BER and PAPR. The results showed that the proposed method consistently outperforms the conventional ISI-free EVD QO-STBC on the order of N_T times the received SNR for all E_b/N_0 investigated, and it increasingly outperformed earlier QO-STBC schemes that used Hadamard matrices. Thus, the interference terms are a limitation in QO-STBC design, as they degrade the true power gains on every antenna at the receiver, especially in the earlier DHSTBC. In all of the MIMO cases reported with MRC at the receiver, it is found that the proposed QO-STBC is the better MIMO technique by at least 9 dB at a BER of 10^−4. With the style of higher antenna orders discussed, our proposal therefore shows potential for supporting massive MIMO system configurations.
Figure 1. The 16-QAM results for the full-diversity QO-STBC MIMO system with N_T = 8 and N_R = 1, 2.
Kinetic and Mechanistic Study on Catalytic Decomposition of Hydrogen Peroxide on Carbon-Nanodots/Graphitic Carbon Nitride Composite
The metal-free CDots/g-C3N4 composite, normally used as a photocatalyst in H2 generation and organic degradation, can also be applied as an environmental catalyst through the in-situ production of the strong oxidant hydroxyl radical (HO·) via catalytic decomposition of hydrogen peroxide (H2O2) without light irradiation. In this work, the CDots/g-C3N4 composite was synthesized via an electrochemical method for preparing CDots, followed by the thermal polymerization of urea. Transmission electron microscopy (TEM), X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, and N2 adsorption/desorption isotherm and pore width distribution measurements were carried out for characterization. The intrinsic catalytic performance, including kinetics and thermodynamics, was studied in terms of the catalytic decomposition of H2O2 without light irradiation. The second-order rate constant of the reaction was calculated to be (1.42 ± 0.07) × 10−9 m·s−1 and the activation energy was calculated to be (29.05 ± 0.80) kJ·mol−1. Tris(hydroxymethyl)aminomethane (Tris) was selected to probe the HO· produced during the decomposition of H2O2 as well as to buffer the pH of the solution. The composite was shown to be base-catalyzed and the optimal performance was achieved at pH 8.0. A detailed mechanism involving adsorb-catalyze double reaction sites was proposed. Overall, the CDots/g-C3N4 composite can be further applied in advanced oxidation technology in the presence of H2O2, and the intrinsic kinetics and mechanism can serve as a reference for further applications in related fields.
Introduction
Advanced oxidation technology (AOT) is one of the most effective and economical approaches for dealing with non-biodegradable organic pollutants (NBDOPs) in water, such as dyestuffs, pesticides, pharmaceutical and personal care products (PPCPs), synthetic chemicals and landfill leachate [1-5]. In typical AOTs, different strategies, such as chemical, photochemical, sonochemical and electrochemical pathways, are employed to produce intermediate active oxidant radicals [1,6-8]. With an oxidation potential of 2.7 eV and a nanosecond-level lifetime, the hydroxyl radical (HO•) is one of the most typical radicals; it can decompose NBDOPs non-selectively, forming CO2, H2O, inorganic ions or other biodegradable molecules [9-11]. It is worth noting that the degradation of NBDOPs and the generation of HO• take place simultaneously [12]. Thus, the core process of various AOTs is to improve the yield of HO•, which mainly drives the decomposition of NBDOPs.
The concentration of instantaneous HO• can hardly be determined directly, but it can be determined indirectly by probes such as Rhodamine B [13], terephthalic acid [14], dimethyl sulfoxide [15], phenylalanine [16] and Tris(hydroxymethyl)aminomethane (Tris) [17]. Among the various probes, Tris can be applied in both homogeneous and heterogeneous systems as an HO• scavenger and pH buffer at the same time [18]. The hydroxyl radical captures a hydrogen atom from Tris, producing formaldehyde (CH2O) and other compounds. Since the produced CH2O can be quantified by the modified Hantzsch method [19], the concentration of HO• can be indirectly quantified. The detailed mechanism of the reaction between HO• and Tris has been reported, involving the effects of O2 and pH [17].
H2O2 is one of the most common sources of HO• in the presence of metal salt solutions, carbon-based species, metals or metal oxides, via the Fenton/Fenton-like reaction, the electron-transfer mechanism or catalytic decomposition at the solid-liquid interface [18,20-24]. The well-known Fenton/Fenton-like reaction may occur in both homogeneous and heterogeneous systems according to many works [18,20-23]. Similar to the Fenton reaction, HO• and HO2• can be formed on the surface of carbon-based catalysts via the electron-transfer mechanism due to the donor-acceptor properties of the carbon surface; a redox cycle is necessary to keep up the production of HO• and HO2• species [24]. The catalytic decomposition of H2O2 on the surface of metals or metal oxides has also been studied to some extent in recent years, including Fe, W, Cu, UO2, ZrO2, CuO, CuO2 and so on [18,22,25-27]. It is known that HO• and HO2• are formed as intermediates during the decomposition of H2O2, while the disproportionation of HO2• ends up with H2O2 and O2. The disproportionation may also occur in the Fenton/Fenton-like reaction and in the reaction between H2O2 and carbon-based catalysts. From previous work, it is known that the reaction between the scavenger and HO• affects the production of O2 [28].
Despite its high efficiency and effectiveness, the application of the classic Fenton reaction faces the disadvantages of strict pH restrictions, iron precipitation and the cost of catalyst recycling [29-31]. The formation rate of HO• is strongly dependent on the pH value, while the oxidation potential of HO• declines as the pH increases [31,32]. Furthermore, the generation of HO• is directly limited by the formation of iron sludge under alkaline conditions [30]. Since iron precipitation remains the bottleneck of the classic iron-based Fenton reaction, non-ferrous heterogeneous catalysts with multiple oxidation states and redox stability (Ce, Cu, Mn and Ru) [11] and transition-metal-substituted iron oxides (Cr, Co and Ti) [32] have been developed as replacements. Nevertheless, the abovementioned metal materials still face the drawbacks of high cost, high toxicity and/or environmental unfriendliness. Hence, a number of metal-free catalysts have been developed for the generation and/or decomposition of H2O2, owing to their high earth abundance, good biocompatibility and environmentally friendly properties, including graphene [6], carbon nanotubes [33], activated carbon fibers [34], graphitic carbon nitride (g-C3N4) [35-38] and carbon nanodots (CDots) [39].
As a metal-free polymer semiconductor material with a suitable band gap and band position, g-C3N4 has demonstrated its research value in the fields of H2 production, CO2 reduction, selective oxidation of alcohols and pollutant degradation [40-43]. The combination of CDots and g-C3N4 was first introduced in 2015 by J. Liu and her co-workers for water splitting, solving the choke point that g-C3N4 is poisoned by in-situ generated H2O2 during hydrogen evolution [44]. H2O can be catalytically split into H2O2 and H2 by g-C3N4 in the presence of photo-irradiation. However, with the two-dimensional structure and large accessible surface area of g-C3N4, the in-situ generated hydrogen peroxide is strongly bonded and difficult to remove, which leads to the poisoning of the catalyst, thereby limiting the yield of H2 [45-47]. CDots were introduced to solve this problem by decomposing the bonded H2O2 on the surface of g-C3N4 into H2O and O2, thereby mitigating the poisoning of g-C3N4. It is known that intermediate HO• will be formed via electron transfer on the surface of carbon-based catalysts [24]. Inspired by these findings, it can be hypothesized that the CDots/g-C3N4 composite can be used as a catalyst providing a promising yield of HO• via the decomposition of the H2O2 adsorbed on the surface of g-C3N4 by the embedded CDots. To the best of our knowledge, the kinetics and mechanism of the catalytic decomposition of H2O2 on the CDots/g-C3N4 composite have rarely been studied.
In this work, the CDots/g-C3N4 composite was synthesized via an electrochemical method followed by a thermal polymerization process. The obtained composites were characterized by TEM, FTIR, Brunauer-Emmett-Teller (B.E.T.) analysis and XRD. The catalytic performance of the CDots/g-C3N4 composite for H2O2 decomposition was also investigated. The second-order reaction rate constant of H2O2 decomposition and the reaction activation energy were obtained by varying the dosage of the composite and the temperature. Furthermore, a detailed mechanism involving adsorb-catalyze double reaction sites was proposed.
Morphology of the Catalyst
The obtained CDots/g-C3N4 composite was prepared via an electrochemical method followed by a thermal polymerization process. To confirm the modification of g-C3N4 with CDots, FTIR spectra and XRD patterns of pure g-C3N4 and the CDots/g-C3N4 composite were obtained and are exhibited in Figure 1A,B. The influence of the CDots modification on the specific surface area was investigated by the B.E.T. method with isothermal adsorption and desorption of high-purity nitrogen. The N2 adsorption-desorption isotherms and pore size distributions of g-C3N4 and the CDots/g-C3N4 composite are shown in Figure 1C. The TEM images of CDots/g-C3N4 are shown in Figure 1D,E. As can be seen in Figure 1A, the sharp peak for g-C3N4 at 810 cm−1 is attributed to the stretching vibration bond of tri-s-triazine [48]. Vibration peaks between 1200-1650 cm−1 correspond to the typical stretching modes of CN heterocycles [49]. A wider band can be seen at 3100-3300 cm−1, which belongs to the stretching vibration modes of the unreacted -NH [50]. The same characteristic peaks are observed in the CDots/g-C3N4 composite, and the peak at 1405 cm−1 can be seen as an indicator of the coupling of CDots and g-C3N4 [48].
XRD patterns of g-C3N4 and CDots/g-C3N4 are displayed in Figure 1B. The main diffraction peaks observed at 12.9° and 27.5° in both g-C3N4 and the CDots/g-C3N4 composite are indexed to the (100) peak of the in-plane structure of the tri-s-triazine unit and the (002) crystal facets of the inter-layer stacking of aromatic segments [51,52]. The two patterns fit well with graphitic carbon nitride (JCPDS 87-1526) and no significant difference is observed, implying the low content of CDots in the CDots/g-C3N4 composite. However, it is remarkable that the difference in relative intensity, together with the shift observed in the (002) peak location from 27.51° for g-C3N4 to 27.59° for CDots/g-C3N4, can be seen as evidence of the CDots introduction [45]. As can be seen in Figure 1C, the introduction of CDots into CDots/g-C3N4 leads to a 20% increase (from 120.92 to 145.24 m2/g) in specific surface area, which favors the decomposition of H2O2.
Figure 1D clearly shows the two-dimensional structure of g-C3N4 together with the embedded CDots (the white circles). A closer look at the CDots embedded in the g-C3N4 matrix is given in Figure 1E. The CDots are non-uniformly distributed, ranging from 2 to 10 nm, which is in line with previous studies [45,51,53].
From the results and analysis above, it can be confirmed that CDots have been successfully decorated in g-C3N4 and that the inlay of CDots brings an improvement in the specific surface area of the composite.
Kinetic Study
The effect of the CDots content in the catalyst has been investigated in several previous works, proving that a certain amount of CDots can enhance the catalytic properties of the catalysts while an excessive loading may have the opposite effect [45,51,53]. Thus, in this work, the CDots/g-C3N4 composite was fabricated with a fixed fraction of CDots (1.26 wt.%) selected by preliminary experiments. It is known that the surface reaction dominates in the present heterogeneous system; therefore, the surface-area-to-solution-volume ratio (SA/V) is used to represent the dosage of the composites rather than the mass concentration, as has been applied in many reported works [17,18,22,27,28,54-56]. The SA/V value is obtained by combining the mass concentration (g/L) with the specific surface area (m2/g) and can be normalized to m−1.
To verify the synergistic effect of CDots and g-C3N4 in H2O2 decomposition, five samples were prepared according to the proportions of CDots and g-C3N4 in the composite. Sample 5 is the CDots/g-C3N4 composite with an SA/V of 4 × 105 m−1. Pure g-C3N4 with the same SA/V value was identified as Sample 2. The single-component CDots solution containing the same amount of CDots as Sample 5 (13.4 mg/L) was named Sample 1; it was obtained by directly diluting the originally prepared CDots solution. The physical mixture of Samples 1 and 2 was identified as Sample 3. Additionally, the originally prepared CDots solution with a high concentration (133 mg/L) was named Sample 4.
Detailed descriptions of the samples are listed in Table 1. It should be noted that Samples 1, 3 and 5 have equivalent amounts of CDots, and Sample 3 is a simple mixture of CDots and g-C3N4 while Sample 5 is the composite. The catalytic decomposition of H2O2 by each sample was investigated under the same experimental conditions, where the initial concentration of H2O2 ([H2O2]0) is 0.5 mM and the temperature is 298 K. The normalized concentration of H2O2 ([H2O2]/[H2O2]0) of each case is plotted against reaction time (shown in Figure 2).
As shown in Figure 2, Sample 4 (the originally prepared CDots, 133 mg/L) incurs a gradual decline in H2O2 concentration in darkness, demonstrating the inherent catalytic property of CDots for H2O2 decomposition. However, the catalytic performance becomes faint after diluting the CDots to 10.2% of the original concentration, as seen by comparing Samples 1 and 4, indicating that this catalytic process is strongly dependent on the applied CDots concentration, in accordance with reported work [57]. Sample 2 (pure g-C3N4) also incurs a slight decline in H2O2, which can be attributed to adsorption on g-C3N4 and catalytic decomposition by the carbon-based material through the delocalization of electrons on the surface [24]. However, the decomposition of H2O2 catalyzed by pure g-C3N4 should not be considered the main process in the system with the CDots/g-C3N4 composite. It is remarkable that the consumption rate of H2O2 for Sample 5 is much larger than for Sample 3, implying that the thermal polymerization process gives rise to a remarkable synergy and that the proximity between the CDots and the adsorption sites of H2O2 on g-C3N4 is necessary for the high efficiency of H2O2 decomposition [44].
It has been previously reported [18,54] that the catalytic decomposition of H2O2 in a heterogeneous system follows pseudo-first-order kinetics with respect to H2O2 when the solid is in excess, and the reaction rate equation can be described as

−d[H2O2]/dt = k1·[H2O2],

which can be integrated as

ln([H2O2]/[H2O2]0) = −k1·t,

where k1 is the pseudo-first-order rate constant at a given temperature and dosage of the solid, t is the reaction time, [H2O2] is the concentration of H2O2 at time t and [H2O2]0 is the concentration of H2O2 at t = 0. When the solid catalyst is in excess relative to H2O2, the second-order rate constant in the system can be determined by studying the pseudo-first-order rate constant (k1) as a function of SA/V (surface area of solid to volume of solution). The second-order rate expression is given as

k1 = k2·(SA/V),

where k2 denotes the second-order reaction rate constant, SA denotes the surface area of the CDots/g-C3N4 and V is the volume of the reaction solution. The term SA/V has been applied to denote the catalyst concentration in a number of studies on heterogeneous catalysis systems [18,22,27,54].
According to the preliminary experiments, the lower limit of SA/V is 3.2 × 105 m−1, beyond which the solid is in excess relative to the fixed initial H2O2 concentration (0.5 mM). A series of experiments was carried out by varying the dosage of catalyst (SA/V) from 3.2 to 6.4 × 105 m−1 under the same conditions at 298 K to explore the kinetics of the present system. The logarithm of the normalized H2O2 concentration is plotted as a function of reaction time (Figure 3A), and the slope of the linearly fitted curve of these plots (k1) is plotted against SA/V accordingly (Figure 3B).
From Figure 3A, it can be seen that all the ln([H2O2]/[H2O2]0) plots are linear in reaction time, which indicates pseudo-first-order kinetics at a given dosage of the composite; the slopes of the fitted curves are denoted as k1. In addition, it is clear that the observed decomposition rate of H2O2 increases with increasing dosage of the composite. The key parameters of the fitted curves, including SA/V, slopes (k1), standard deviations and R2, are listed in Table 2. The obtained k1 values from Table 2 were plotted in Figure 3B against SA/V. As can be seen from Figure 3B, k1 is linearly correlated with SA/V in the range of 3.2-6.4 × 105 m−1, and the slope of the fitted curve is calculated as (1.42 ± 0.07) × 10−9 m·s−1, which can be denoted as the overall second-order rate constant. This value is far below the rate constant of a diffusion-controlled reaction, which is on the order of 10−5 m·s−1, but still higher than those of some metal oxide catalysts such as ZrO2 ((2.39 ± 0.09) × 10−10 m·s−1), CuO ((1.23 ± 0.06) × 10−9 m·s−1) and Gd2O3 ((9.4 ± 1.0) × 10−10 m·s−1) [18,54].
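The two-step fitting procedure can be sketched as follows; the data points are synthetic placeholders chosen only to be consistent with the reported k2, not the actual measurements.

```python
import numpy as np

# Step 1: fit the pseudo-first-order model ln(C/C0) = -k1 * t at one dosage.
t = np.array([0, 600, 1200, 1800, 2400])              # s (illustrative)
C_over_C0 = np.array([1.0, 0.78, 0.61, 0.47, 0.37])    # [H2O2]/[H2O2]0

k1 = -np.polyfit(t, np.log(C_over_C0), 1)[0]
print(f"k1 = {k1:.2e} s^-1")

# Step 2: repeat at several dosages; the slope of k1 vs SA/V gives k2.
sa_over_v = np.array([3.2e5, 4.0e5, 4.8e5, 5.6e5, 6.4e5])       # m^-1
k1_values = np.array([4.6e-4, 5.7e-4, 6.8e-4, 8.0e-4, 9.1e-4])  # s^-1
k2 = np.polyfit(sa_over_v, k1_values, 1)[0]
print(f"k2 = {k2:.2e} m s^-1")   # ~1.4e-9 m s^-1, matching the reported value
```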
Generally, the first-order rate constant k1 is strongly related to the reaction temperature according to the Arrhenius equation:

ln(k1) = ln(A) − Ea/(R·T),

where Ea denotes the activation energy of the reaction, R is the gas constant, T is the absolute temperature and A is the pre-exponential factor. The logarithm of k1, obtained by fitting ln([H2O2]/[H2O2]0) against reaction time at each temperature (shown in Figure 4A), is plotted as a function of 1/T in Figure 4B so as to calculate Ea.
It can be seen from Figure 4B that ln(k1) is linearly dependent on 1/T, and the slope of the fitted curve is obtained. Based on the slope, the activation energy of the reaction is calculated to be (29.05 ± 0.80) kJ·mol−1, which is somewhat lower than for a series of previously developed metal oxides, including ZrO2 ((33 ± 1) kJ·mol−1), TiO2 ((37 ± 1) kJ·mol−1), Y2O3 ((47 ± 5) kJ·mol−1), Fe2O3 ((47 ± 1) kJ·mol−1), CuO ((76 ± 1) kJ·mol−1), CeO2 ((40 ± 1) kJ·mol−1), Gd2O3 ((63 ± 1) kJ·mol−1) and HfO2 ((60 ± 1) kJ·mol−1) [18,54,58].
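The Arrhenius fit itself is a one-line regression; the (k1, T) pairs below are synthetic values generated to be consistent with the reported Ea, for illustration only.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Synthetic (k1, T) pairs consistent with Ea ~ 29 kJ/mol (illustrative).
T = np.array([288.0, 298.0, 308.0, 318.0])          # K
k1 = 5.0e-4 * np.exp(-29_050 / R * (1 / T - 1 / 298.0))

slope = np.polyfit(1 / T, np.log(k1), 1)[0]         # ln k1 = ln A - Ea/(R T)
Ea = -slope * R
print(f"Ea = {Ea / 1000:.2f} kJ/mol")               # ~29.05 kJ/mol
```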
The key parameters of the fitted curves, including T, the pseudo-first-order reaction rate constants (k1), standard deviations and R2, are listed in Table 3.
The Effect of pH
To investigate the mechanism of the present system containing H2O2 and the CDots/g-C3N4 composite, it is important to study the pH effect as well as to quantify the in-situ produced hydroxyl radicals. Due to its scavenging capacity against HO• and its pH buffering ability, Tris was chosen for the mechanistic study. The pKa and the buffering range of Tris are 8.07 and 7.0-9.0, respectively, so the pH values were selected within this range. The decline of H2O2 together with the production of CH2O against reaction time at different pH values is exhibited in Figure 5. It can be clearly seen in Figure 5 that the decomposition rate of H2O2 increases with pH over the whole range. However, the formation of CH2O shows a different trend: the evolution of CH2O remains relatively low under neutral conditions (pH 7), then accelerates up to pH 8, after which the formation rate of CH2O declines as the pH increases further. This indicates that the formation of CH2O, and probably the production of HO•, is alkaline-favored, while the production is to some degree related to the pKa of Tris [54].
To figure out whether the production of HO• is also pH-dependent, it is necessary to introduce the yield (Y) of CH2O formed by HO• and Tris. Y is defined by the equation Y = [CH2O]/[HO•], where [HO•] is the production of HO• and [CH2O] is the accumulated CH2O in the H2O2 decomposition experiment. According to a previous study using γ-radiation in a homogeneous system [17], the yield (Y) of CH2O increases from 25% to 51% as pH increases from 7.0 to 9.0. Provided that the yield (Y) in the heterogeneous system is consistent with that in the homogeneous system, the production of HO• during H2O2 decomposition on the CDots/g-C3N4 composite can be estimated from this value together with the final concentration of CH2O. The results are shown in Figure 6. As appears in Figure 6, the plots of the estimated production of HO• exhibit a similar tendency to that of the final production of CH2O, still peaking at pH 8.0 with a maximum concentration of 2.10 mM ([HO•] = [CH2O]/Y, where Y = 37.5% at pH 8.0). This means the stoichiometry between H2O2 and CH2O is approximately 1:0.158, and 42.1% of H2O2 ([H2O2]0 = 5 mM, Tris in excess of H2O2) ends up as HO• at the optimal pH. The efficiency of H2O2 consumption towards HO• is much higher compared to that of the H2O2/ZrO2/Tris system (13.4% at pH 8.0) [17]. Therefore, it can be concluded that the production of HO• is also pH-dependent and that there is an optimal pH, which may be related to the Tris [22].
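A quick back-of-envelope check of these numbers; the final CH2O concentration below is derived from the quoted 2.10 mM and Y = 37.5%, not read directly from Figure 6.

# Back-of-envelope check of the HO* estimate at pH 8.0, using numbers from
# the text: Y = [CH2O]/[HO*] = 37.5 %, [H2O2]0 = 5 mM.
Y = 0.375                       # CH2O yield per HO* at pH 8.0 (from [17])
ch2o_final = 0.79               # final [CH2O] in mM, ~ 2.10 mM * Y (derived)
ho = ch2o_final / Y             # estimated HO* production, mM
h2o2_0 = 5.0                    # initial H2O2, mM
print(f"[HO*] ~ {ho:.2f} mM")                            # ~2.10 mM
print(f"CH2O per H2O2: {ch2o_final / h2o2_0:.3f}")       # ~0.158
print(f"H2O2 ending as HO*: {100 * ho / h2o2_0:.1f} %")  # ~42 %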
Based on the results of the present work, the intrinsic chemical catalytic properties of the synthesized CDots/g-C3N4 composite, beyond its photocatalytic properties, have been revealed to some extent. The hypothesis in the introduction section can be confirmed as follows: firstly, as demonstrated in Figure 2, the pure CDots synthesized via an electrochemical pathway showed excellent catalytic ability, in line with the literature [59]; secondly, similar to the reported property of g-C3N4 [60], Figure 2 also shows that the concentration of H2O2 in the solution decreases slightly in the presence of pure g-C3N4, which indicates that g-C3N4 does provide sufficient reaction sites for H2O2 to adsorb; thirdly, from the results in Figures 2, 3, 5 and 6, it is known that hydroxyl radicals can be formed during the decomposition of H2O2 catalyzed by the CDots/g-C3N4 composite in the heterogeneous system, as hypothesized in the former section. Besides the strong affinity of g-C3N4 towards H2O2 and the catalytic activity of CDots against the adsorbed H2O2 [47], the delocalization of the electrons on the surface of g-C3N4 may also lead to the decomposition of H2O2 via an electron-transfer mechanism [24]. In conclusion, the synthesized CDots/g-C3N4 composite exhibits the synergy of adsorption of H2O2 and delocalization of electrons on g-C3N4 with catalytic decomposition of H2O2 by g-C3N4 and CDots, producing hydroxyl radicals.
Based on the results and discussion above, the mechanism of catalytic decomposition of H2O2 in the heterogeneous system with the CDots/g-C3N4 composite is proposed and illustrated in Figure 7. The key to the mechanism is the so-called adsorb-catalyze double reaction sites. With plenty of accessible adsorption sites on the surface of g-C3N4, the CDots/g-C3N4 composite shows highly selective adsorption of aqueous H2O2. From previous works studying similar heterogeneous systems with H2O2 and a solid catalyst [18,22,54], it is known that the H2O2 concentration exhibits an initial drop indicating adsorption on the surface of the catalyst, after which the H2O2 decomposition obeys pseudo first-order kinetics once the surface reaches an equilibrium state. As can be seen in Figure 3A, the H2O2 concentration follows a similar trend and kinetics. Hence, adsorption of H2O2 also dominates during the initial short period. Tris was introduced in the present work as a hydroxyl radical probe and pH buffer. It is known that Tris can be partially oxidized to CH2O and other byproducts, and the ratio between the concentration of hydroxyl radicals and the formed CH2O is relatively fixed under given conditions (pH and dissolved oxygen concentration) [17,27]. Therefore, the formation of CH2O can be used to probe the hydroxyl radicals formed in the heterogeneous system containing H2O2 and the CDots/g-C3N4 composite. In line with previous works [17,18,22,27], CH2O formation reflects the decomposition of H2O2 and is pH-dependent (Figures 5 and 6). Hence, it can be deduced that after the initial period, the adsorbed H2O2 on the surface of the CDots/g-C3N4 composite reaches an equilibrium state and the decomposition of H2O2 catalyzed by the embedded CDots on the surface sites becomes the dominant process. During this process, large quantities of HO• are produced, exhibiting strong oxidation ability towards scavengers like Tris. It should be noted that the production of HO• is strongly pH-dependent. To sum up, the CDots/g-C3N4 composite shows a synergetic effect on the decomposition of H2O2 via adsorb-catalyze double reaction sites and, more importantly, proves to be a promising catalyst for the degradation of NBDOPs since it offers a metal-free pathway for producing HO• efficiently.
Instrumentation
The morphology and microstructure of the samples were observed by a JET-2100F (JEOL, Wuhan, China) transmission electron microscope (TEM). The Fourier transform infrared (FTIR) spectra of the samples were recorded by a Nicolet iS5 (Thermo Fisher Scientific, Wuhan, China) FTIR spectrometer with KBr pellets. The specific surface areas of the CDots/g-C3N4 composite and pure g-C3N4 were determined by the Brunauer-Emmett-Teller (B.E.T.) method via isothermal adsorption and desorption of high-purity nitrogen using a TriStar II 3020 (Micromeritics, Wuhan, China) instrument. X-ray diffraction (XRD) patterns were recorded with a D8 Advance (Bruker, Wuhan, China) diffractometer using Bragg-Brentano geometry in the 2θ range from 10° to 40° with Cu Kα irradiation (λ = 1.54 Å). The samples were weighed to ±10−4 g on an ME104E (Mettler Toledo, Wuhan, China) microbalance. UV/Vis spectra were collected by V-5600 (METASH, Wuhan, China) and UV-5500PC (METASH, Wuhan, China) spectrophotometers. The pH of the reaction solution was measured by a PHS-3C (YOKE, Wuhan, China) pH meter with an accuracy of ±0.01 pH units.
Reagents and Experiments
All the solutions used in this study were prepared using deionized water. Preparation of the catalyst: CDots were synthesized via an electrochemical method based on previously reported work [59]. In a typical preparation process, two graphite rods were inserted in parallel into 300 mL of ultrapure water as electrodes, with a separation of 7.5 cm and a depth of 4 cm under water. A static potential of 60 V was applied to the rods by a direct-current (DC) power supply. After electrolyzing for 120 h, the anode graphite rod had corroded and a dark brown solution had formed. The solution was filtered with slow-speed quantitative filter paper and then centrifuged at 10,000 rpm for 10 min. Finally, the soluble CDots were obtained, and their concentration was quantified by drying and weighing.
A thermal polymerization method was applied for the synthesis of pure g-C3N4 [61]. Typically, 40 g urea (CAS [57-13-6], 99%, Sinopharm, Wuhan, China) was dissolved in 40 mL ultrapure water in a quartz crucible, then heated to 550 °C at a rate of 7 °C/min in a muffle furnace and kept at 550 °C for 2 h. After naturally cooling down to room temperature, the resultant yellow product was collected and ground into powder to obtain pure g-C3N4. The CDots/g-C3N4 composite was synthesized via in-situ thermal polymerization [51]. Following the same procedure, 40 g urea was dissolved in 40 [17]. The decomposition experiments were carried out in 100 mM Tris solution with fixed quantities of CDots/g-C3N4 (SA/V = 4 × 10^5 m−1, where SA and V stand for the surface area of solid and the volume of solution) and H2O2 (5 mM). The pH values of the solution were selected within the valid buffering range of Tris, namely pH 7.0-9.0, to investigate the effect of pH. The produced CH2O was quantitatively determined by a modified Hantzsch method, in which CH2O reacts with AAA in the presence of NH4Ac to form a dihydropyridine derivative with a maximum absorbance wavelength at 368 nm [19]. The calibration curve, in which the absorbance of the dihydropyridine derivative at 368 nm was plotted as a linear function of CH2O concentration in the range of 0.04-1.3 mM, was used to convert absorbance to CH2O concentration. The experimental error in the determination of CH2O was less than 2%.
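The conversion from absorbance to CH2O concentration is a simple linear calibration. A hedged Python sketch follows; the standards and absorbances are hypothetical placeholders standing in for the actual calibration data.

import numpy as np

# Sketch of the CH2O calibration described above: absorbance at 368 nm vs
# standard CH2O concentration, fitted linearly (Beer-Lambert behaviour).
c_std = np.array([0.04, 0.2, 0.4, 0.7, 1.0, 1.3])             # CH2O standards, mM
a_368 = np.array([0.031, 0.155, 0.310, 0.545, 0.778, 1.012])  # absorbance (hypothetical)

slope, intercept = np.polyfit(c_std, a_368, 1)

def ch2o_from_absorbance(a):
    """Convert a sample absorbance at 368 nm to CH2O concentration (mM)."""
    return (a - intercept) / slope

print(f"CH2O = {ch2o_from_absorbance(0.42):.2f} mM")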
Conclusions
In this work, a promising catalyst for the degradation of non-biodegradable organic pollutants, the CDots/g-C3N4 composite, was synthesized via a two-step pathway comprising electrochemical exfoliation of a graphite rod to prepare CDots and thermal polymerization of CDots mixed with urea. Through different characterization methods and kinetic experiments, it was confirmed that the CDots embed in the g-C3N4 matrix and that this structure accounts for the synergetic catalytic performance of the composite. The kinetics of the catalytic decomposition of H2O2 on the CDots/g-C3N4 composite were investigated. The second-order rate constant (k2) was measured to be (1.42 ± 0.07) × 10−9 m•s−1 and the activation energy of the reaction to be (29.05 ± 0.80) kJ•mol−1 under the applied conditions. The effect of pH (7.0-9.0) on the production of HO• was also investigated by using Tris as a probe. It was shown that the production of HO• is strongly alkaline-dependent and reaches its maximum at pH 8.0, which is close to the pKa of Tris. A mechanism based on the adsorb-catalyze double reaction site concept has been proposed. This work implies that a photocatalyst (the CDots/g-C3N4 composite) for water splitting or H2 evolution may also be applied as an alternative catalyst in the degradation of non-biodegradable organic pollutants. The intrinsic kinetics and the mechanism can be referred to for further applications in related fields.
The influence of CDots modification on the specific surface area was investigated by the B.E.T. method with isothermal adsorption and desorption of high-purity nitrogen. The N2 adsorption-desorption isotherms and pore size distributions of g-C3N4 and the CDots/g-C3N4 composite are shown in Figure 1C. The TEM images of CDots/g-C3N4 are shown in Figure 1D,E.
Figure 5.
Figure 5. The concentration of H2O2 ([H2O2]0 = 5 mM) and CH2O as a function of reaction time in the presence of Tris ([Tris]0 = 100 mM) with a catalyst dosage of 4.0 × 10^5 m−1 at 298.15 K, at pH from 7.0 to 9.0.
Figure 6.
Figure 6. Final production of CH2O and estimated production of HO• as a function of pH. The final productions of CH2O were extracted from Figure 5. The estimated [HO•] = [CH2O]/Y, and the yields (Y) were derived from a previous work [17].
Figure 7.
Figure 7. The mechanism of catalytic decomposition of H2O2 on the CDots/g-C3N4 composite.
ln([H2O2]/[H2O2]0) = −k1·t, where k1 is the pseudo first-order rate constant at a given temperature and dosage of the solid, t is the reaction time, [H2O2] is the concentration of H2O2 at time t and [H2O2]0 is the concentration of H2O2 at t = 0. When the solid catalyst is in excess of H2O2, the second-order rate constant in the system can be determined by studying the pseudo first-order rate constant (k1) as a function of SA/V (surface area of solid to volume of solution). The second-order rate expression is given as k1 = k2·(SA/V).
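As a worked example of the two expressions, the reported k2 together with a typical dosage from the experiments gives the pseudo first-order constant and the corresponding half-life.

import math

# Consistency check of the two rate expressions using the reported k2
# and a typical catalyst dosage from the experiments.
k2 = 1.42e-9        # overall second-order rate constant, m s^-1
sa_v = 4.0e5        # catalyst dosage as surface area per volume, m^-1

k1 = k2 * sa_v                       # pseudo first-order constant, s^-1
half_life = math.log(2) / k1         # time for [H2O2] to halve, s
print(f"k1 = {k1:.2e} s^-1, t1/2 = {half_life / 60:.0f} min")  # ~20 min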
Table 2.
The key parameters of the fitted curves with different SA/V values.
Table 3.
The key parameters of the fitted curves with different temperatures.
Production of Hematite Micro- and Nanoparticles in a Fluidized Bed Process—Mechanism Study †
A continuous, compact and simple process was developed to synthesize micro- and nanoparticles of iron oxide. The process combines the spraying (pulverization) of an aqueous solution of iron nitrate in a fluidized bed reactor containing coarse and hot glass beads (T = 200 °C) for the production of solids and a transported bed reactor for calcination (T = 490 °C). The intermediate product formed in the fluidized bed reactor is 2-line ferrihydrite, while the calcination reactor allows the production of hematite micro- and nanoparticles. These particles are characterized by a narrow size distribution, a mean size of 0.5 μm, a specific surface area of 24 m2 g−1 and a density of 4499 kg m−3. The particles are made up of small clusters of crystallites having an average size of 47 nm and a low internal porosity (0.12). The reaction mechanism was studied using a muffle furnace and a lab convective dryer. It was found that several steps are involved, leading first to the production of iron nitrate dihydrate after the removal of the solution water as well as two and then five molecules of water of hydration. After that, the elimination of nitrate leads to the production of ferrihydrite. Finally, ferrihydrite is transformed into hematite due to the removal of residual nitrate and water of hydroxylation.
Introduction
The production of nanometric particles is of great interest for various industrial sectors since they have attractive properties. In particular, magnetite, hematite and maghemite nanoparticles present interesting magnetic, electrical and optical properties. They are used for wastewater treatment, biomedical and catalytic applications, or as pigments for painting. Moreover, they are intensively investigated for applications in gas sensors, lithium batteries and cosmetic products.
Conventional processes for the synthesis of hematite nanoparticles, such as chemical precipitation (Lunin A.V. et al., 2019), hydrothermal synthesis (Zhu M. et al., 2012), forced hydrolysis (Wang W. et al., 2008), sol-gel (Pawar M.J. and Khajone A.D., 2012) and microemulsion (Housaindokht M.R. and Pour A.N., 2011), comprise numerous steps: precipitation and crystallization, filtration, washing, drying and dry grinding. All these operations need numerous devices which may lead to a high cost of production and may affect the reproducibility of the product properties. Other methods such as thermal decomposition (Glasgow W. et al., 2016) or spray pyrolysis (Ozcelik B.K. and Ergun C., 2015) were proposed as a continuous process. However, the production capability of some of them is limited.
In previous studies carried out in our laboratory (Pont V. et al., 2001;Hémati M. et al., 2003), fine particles were produced using a hot fluidized bed reactor containing coarse and hydrophobic beads with a diameter of over 500 μm. Aqueous solutions containing organic or inorganic salts were pulverized within the reactor. The solution was then dried at the contact of the hot beads, leading to the formation of nano-and microparticles which were then removed from the beads' surface due to a strong mixing of the fluidized medium under the effect of the fluidizing gas. This operation led to the production of particles having a nature identical to that of the products in solution.
In the study presented here, we propose to add a step which allows transformation of the particles leaving the fluidized bed. For this purpose, we implemented calcination after the particles' production in the fluidized bed, in order to produce hematite particles using a precursor solution of iron nitrate nonahydrate dissolved in water.
The objectives are to characterize the products formed in the process, and to understand and analyze the various phenomena occurring in the generation and calcination reactors. The manner in which the particles are formed in the process is also explained. Finally, a mechanism is proposed, defining the different reaction steps appearing during the transformation of the iron nitrate solution.
Fluidized bed process
The process used in this study is presented in Fig. 1. The fluidized bed reactor is a vertical stainless steel column with an inner diameter of 0.1 m and a height of 0.5 m, filled up to half its capacity with 1.5 kg of glass beads (diameter = 1.4-1.6 mm) and topped by a conical freeboard section. This conical section is closed by a lid equipped with a guide tube for the introduction of a spraying system and an exit for evacuating gas and particles. The column is provided at its base with a wind box permitting air homogenization. The fluidization gas distribution is ensured by a distributor made up of a perforated stainless steel plate with a porosity of 2 %, under which a metallic grid of low opening is fixed, whose role is to prevent the fine particles from passing through the distributor. Before entering the bed, the fluidizing air flow rate is measured by means of rotameters and the air is preheated by an electrical heater (4 kW). The precursor solution (aqueous solution of iron nitrate nonahydrate) is stored in a reservoir placed on a balance in order to control the flow rate of the solution. It is drawn by a volumetric pump from the reservoir to an internal-mixing two-fluid spray nozzle (Spray System Co.). The atomizing gas (air) flow rate is controlled by a valve and measured by a rotameter. The atomizer is a downward-facing nozzle located in the bed. The bed temperature is controlled by means of a PID regulator. Temperature and pressure drop are monitored during operation.
A cyclone (0.09 m in diameter and 0.15 m in height), with a cut diameter of around 10 μm for the operating conditions used in this study, is placed at the fluidized bed reactor outlet. It allows recovering particles larger than its cut diameter (large solid particles or broken beads), while fine particles are entrained in the gas current towards the end of the process.
Then, the gas-particle suspension feeds a second reactor (calcinator). This reactor, made of a heat-resistant stainless steel tube (0.1 m in diameter and 1.7 m in length), is heated externally by an electrical furnace (8 kW). Prior to filtration, the suspension at the outlet of the calcinator is cooled to 150 °C by a cold air current to avoid any degradation of the filter. The filtration is carried out by a vibrated metallic sleeve filter containing four compartments, each including one cartridge. Iron oxide particles are recovered at the filter bottom. This filter was preferred to a cyclone because of its greater efficiency in capturing fine particles (dp > 0.3 μm).
The experimental protocol is as follows. The fluidized bed reactor is charged with the glass beads and then closed. The bed is fluidized with preheated air, to reach the set point temperature. The second reactor is also heated and in parallel the cooling air is fed to its outlet side. In order to reduce the thermal disturbance caused by the liquid atomization, pure solvent (distilled water) is initially sprayed into the fluidized bed reactor at the same flow rate as the precursor solution. When the bed temperature returns to the set value, the distilled water is switched with the iron nitrate solution, and 1.5 kg of solution is sprayed into the reactor. Particles are recovered at the bottom of the cyclone and the filter and are maintained in a desiccator at ambient temperature before being analyzed.
The experiment was repeated 5 times with the same operating conditions in order to verify the synthesis reproducibility.
Muffle furnace
In parallel, solution samples (2 g with the same concentration as the solution used in the fluidized bed process) were placed in glass cups and heated for one hour in a muffle furnace kept at different temperatures (30, 50, 80, 100, 120, 130, 140, 150, 200, 250, 300, 400 and 500 °C).
After the treatment, the samples were removed from the furnace and weighed. Photos were taken and the products were immediately analyzed.
Lab convective dryer
For a better understanding of phenomena occurring at low temperatures, a lab convective dryer was used to dry solution samples at several fixed air temperatures up to 80 °C, which is the maximum temperature in this device.
Hot air flowed parallel to the surface of a disc-shaped sample holder covered by cotton, on which 7 g of solution with the same concentration as indicated above was deposited. The sample holder was placed on a sample port pipe. It was connected to a scale (Mettler Toledo France, AT261, precision ±0.1 mg) in order to continuously register the sample mass, while an infrared pyrometer (Keller, Cellatemp PQ13AF1, Germany) placed above the sample allowed the temperature of the sample surface to be measured in parallel.
A preliminary study showed that the air flow velocity has a significant effect on the drying kinetics for values lower than 0.5 m/s while the effect can be neglected for higher values. Thus, this parameter was fixed at 0.8 m/s.
Material characterization
The particles' composition was characterized using Fourier transform infrared spectroscopy (FTIR TENSOR 27, Bruker Optic) and Raman spectroscopy (RAMAN labram HR 800, Horiba Jobin Yvon). The crystalline structure was identified by X-ray powder diffractometry (D8 ADVANCE, Bruker). The average crystallite size of the samples, dX-ray, was calculated from the X-ray diagrams according to the Debye-Scherrer equation, dX-ray = kλ/(β cos θ), where λ is the X-ray wavelength (λ = 1.5418 Å in this study), θ the diffraction angle for the (104) peak, β the peak width at mid-height and k the Scherrer constant, equal to 0.9. The size distribution of the particles was measured by means of a dry laser granulometer (MALVERN Mastersizer 2000 equipped with a Scirocco). Data were treated with the Mie theory in order to minimize artefacts at very small sizes. The particles were also observed using a scanning electron microscope (SEM-FEG JSM 7100F TTLS, JEOL). The particles' real (skeleton) density was determined using a helium pycnometer (Micromeritics AccuPyc 1330TC). Finally, the Brunauer-Emmett-Teller (BET) surface area, pore size distribution, pore volume and average pore diameter were measured using a multigas porosimeter (ASAP 2010 M, Micromeritics).
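The Scherrer calculation is a one-line formula once the peak position and width are known. A minimal sketch follows; the peak width used is a guessed value chosen to reproduce the reported crystallite size, since the measured FWHM is not given in the text.

import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength=1.5418, k=0.9):
    """Crystallite size (same units as wavelength) from the Scherrer equation,
    d = k*lambda / (beta*cos(theta)), with beta the FWHM in radians."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return k * wavelength / (beta * math.cos(theta))

# The (104) hematite peak lies at 2-theta = 33.2 deg; the FWHM below is a
# hypothetical value that reproduces the reported ~47 nm.
d_angstrom = scherrer_size(33.2, 0.177)
print(f"d = {d_angstrom / 10:.0f} nm")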
Analyses were repeated 3 times to control the reproducibility of the properties.
The precursor was dissolved in water to form a solution with a concentration of 66.7 wt.%, i.e. 2 kg of iron nitrate nonahydrate per kg of distilled water, which is 89 % of the saturation concentration at 20 °C. The viscosity and the density of this solution at 20 °C are 7.6 × 10−3 Pa·s and 1360 kg m−3, respectively.
Operating conditions
The operating conditions of the process were fixed as follows.
Concerning the fluidized bed and the calcinator temperatures, we analyzed the variation of the mass loss of the solution samples in the muffle furnace (Fig. 2).
Taking into account the slope changes, four steps can be defined:
- between ambient temperature and 80 °C, with a mass loss of 54.4 wt.%,
- between 80 °C and 130 °C, with a mass loss of 10.1 wt.%,
- between 130 °C and 150 °C, with a mass loss of 19.3 wt.%,
- between 150 °C and 500 °C, with a mass loss of 3.2 wt.%.
These zones correspond to different reactions that will be described later.
In order to ensure that the reactions of the three first steps occurred in the fluidized bed reactor, while the last step took place in the calcinator since it needs a high temperature, the fluidized bed and the calcinator temperatures were set at 200 °C and 490 °C respectively. After different attempts, the temperature of the generation reactor was fixed at 200 °C, i.e. a little above 150 °C, to be sure that the third reaction step was achieved before the particles left this reactor, since the residence time in the fluidized bed is shorter than in the muffle furnace. Thus, the first apparatus is the particle generation reactor where an intermediate material is produced, while the second one is the transformation reactor that the final material leaves.
The fluidization air flow rate was equal to 40 m3 h−1 at 200 °C, the corresponding velocity being equal to 1.6 times the minimum fluidization velocity of the beads, which was determined to be 0.84 m s−1. The spraying air flow rate, fixed at 1.6 m3 h−1 at ambient temperature, was high enough to obtain a good dispersion of solution droplets within the fluidized bed. Finally, the mass flow rate of the solution was equal to 0.3 kg h−1.
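As a quick check, the superficial velocity implied by the stated flow rate and column diameter can be compared with the minimum fluidization velocity; assuming the flow is expressed at bed conditions, the ratio comes out at about 1.7, consistent with the quoted 1.6.

import math

# Check that the stated air flow corresponds to ~1.6 x Umf in the 0.1 m column.
Q = 40.0 / 3600.0                 # fluidizing air flow at 200 C, m^3 s^-1
D = 0.1                           # column inner diameter, m
A = math.pi * D**2 / 4.0          # cross-section, m^2

u = Q / A                         # superficial gas velocity, m s^-1
u_mf = 0.84                       # minimum fluidization velocity, m s^-1
print(f"u = {u:.2f} m/s -> u/Umf = {u / u_mf:.1f}")   # ~1.6-1.7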
Intermediate material
The intermediate material is produced in the solid generation reactor. Given the configuration of this reactor, particles cannot be directly sampled in the fluidized bed during the experiment. Consequently, we analyzed the product recovered at the cyclone bottom. Furthermore, to be sure that the product recovered at the cyclone bottom is representative of the product in the reactor, glass beads were removed from the reactor after the process was stopped at the end of the experiment, and the powder adhering to the beads was collected.
Composition of the product
The intermediate powder, recovered at the cyclone bottom, was characterized by infrared spectroscopy, as was the precursor (Fig. 3a).
Concerning the precursor, one can observe different absorption bands characteristic of nitrate at 825, 1384 and 1764 cm -1 , of OH stretching vibration at 3400 cm -1 , of H-O-H bending stretching vibration at 1635 cm -1 and of atmospheric CO 2 at 2360 cm -1 . As for the product recovered at the cyclone bottom, similar bands are observed. However, the characteristic nitrate band at 1384 cm -1 is narrower than that on the spectrum of iron nitrate nonahydrate. In addition, three bands appear between 800 and 400 cm -1 , respectively at 700, 570 and 460 cm -1 , which are not present on the iron nitrate nonahydrate spectrum. These bands suggest that the product is ferrihydrite (Mazzetti L. and Thistlethwaite P.J., 2002). The product was also analyzed by Raman spectroscopy (Fig. 3b) and the spectrum confirms the nature of the intermediate product. Two forms of this product are mainly proposed in the literature: 2-line and 6-line ferrihydrites. They have similar FTIR and Raman spectra (Mazzetti L. and Thistlethwaite P.J., 2002) but different X-ray diffraction patterns (Majzlan J. et al., 2004). The first one shows two broad XRD peaks and the second one six XRD peaks. The intermediate product was also analyzed by X-ray diffraction (Fig. 3c). Two broad peaks are clearly observed at 34.7° and 62.4°, indicating that it is 2-line ferrihydrite that is produced in the generation reactor.
Finally, some powder fixed on the surface of the glass beads directly sampled in the generation reactor at the end of the experiment was analyzed by infrared spectroscopy and X-ray diffraction. The results confirm that 2-line ferrihydrite was also synthesized within the dry fluidized bed reactor and consequently did not undergo any transformation in the cyclone.
Structure and morphology
The powder collected at the cyclone bottom was analyzed with a laser granulometer and a scanning electron microscope. The particle size distribution of the powder is between 0.2 and 550 μm with a mean diameter (d50) of 31.4 μm (Fig. 4). The SEM micrograph in Fig. 5 shows that the powder is made up of:
- Large fragments with a size of several micrometers, on which small particles are agglomerated. These fragments constitute the population with sizes higher than the cut diameter of the cyclone (10 μm).
- Smaller particles with a size lower than the cut diameter of the cyclone. This population contains small microfragments and nanoparticles, and should not be stopped by the cyclone. Their presence in the sample taken at the cyclone bottom is due to a partial deagglomeration of small particles stuck on large fragments under the effect of the particles' movements within the cyclone. These ephemeral agglomerates were formed by Van der Waals or electrostatic forces.
These different particle populations are formed at the surface of the glass beads during solid generation. SEM micrographs of a bead surface are presented in Fig. 6. One can observe particles with a size lower than 1 μm stuck on a compact deposit which is cracked.
To explain the presence of both small particles and solid film on these photos, Fig. 7 shows different schemes explaining the progressive behavior of the precursor solution pulverized on the beads.
During the generation step, the precursor solution is pulverized in the bed of coarse glass beads fluidized by hot air. When the beads pass under the spraying jet, a liquid film is formed around them (a). This thin film is then dried and decomposed, which leads to the formation of nanoparticles of intermediate product on the beads (b). Some of the particles are removed from the bead surface under a friction effect, either in an individual form or in the form of small agglomerates, and they leave the reactor with the air current (c). However, some particles remain adhered to the beads' surface. Prolonged spraying of the precursor solution leads to a liquid deposition on the small particles remaining on the beads' surface or between them (d). The transformation of the solution thus deposited leads on the one hand to a progressive filling of the interstitial spaces between the particles by other nanoparticles, and on the other hand to an increase of their size. These repeated phenomena allow the production of a solid film with a low porosity around the glass beads, and nanoparticles are deposited on this solid film, as it can be seen in Fig. 6. If all these steps are repeated throughout the experiment, the thickness of the solid film is increased (g). Under precursor transformations (drying and decomposition), the film is subjected to cracking, and collisions between beads weaken the solid phase which can be broken and form fragments.
Besides these explanations, one can also think that the residual moisture and the sticky nature of the intermediate solid phase deposited on the beads' surface generate, during collisions, an instantaneous bonding between the beads at their contact points (solid bridge formation). The resistance of the bridges between the beads depends on the deformability of the solid film. Under the effect of the bed agitation created by fluidization, these contacts are broken, extracting a part of the solid bridges and thus forming large fragments. However, if bonding strengths exceed those of breakage imposed by the fluidized medium, the beads will stick to each other, the bed will form a block and fluidization will stop.
In this study, the production of nano- or microparticles in the fluidized bed reactor may be due to:
- Attrition caused by friction between the beads. This mechanism leads to the formation of nanometric particles that are removed from the film surface. These particles cannot be retained by the cyclone because of their small size and feed the calcination reactor;
- Partial removal of the solid deposit formed by bonding forces between glass beads. The solid bridges between the beads may be broken under the effect of the intense movement of the fluidized bed, causing detachment of large fragments whose size is sufficient for them to be stopped by the cyclone;
- Cracking and fragmentation of the solid film deposited around the beads, due to collisions between them. Under the drying effect, the film can crack, as observed in Fig. 6a, and fragments larger and more angular than the fine particles can be released. Nanoparticles can stick to these fragments, most of them ranging in size from a few micrometers to several tens of micrometers. These fragments, an example of which is shown in Fig. 5, are stopped by the cyclone.
In the generation zone of the process, four types of particles are formed: elementary spheroidal grains, small agglomerates of these grains, large and angular fragments, as well as these large fragments with elementary grains stuck on them. The two last populations of particles are mainly stopped by the cyclone, while the two first are essentially carried by the air flow towards the calcination reactor. Note that the size of the fragments can be modified by fragmentation or agglomeration caused by their interactions with the fluidized medium, which can act as a ball mill.
Surface properties
The real density of the synthesized ferrihydrite was determined using helium pycnometry. It is equal to 3730 kg m−3, close to the value of 3960 kg m−3 reported by Cornell R.M. and Schwertmann U. (2003). Fig. 8a shows the adsorption-desorption isotherms of the synthesized ferrihydrite. This product exhibits a type II isotherm with hysteresis, characteristic of a micro- and mesoporous product. From these data, the surface area was determined to be 83 m2 g−1. According to other works, the specific surface area of ferrihydrite depends on the synthesis method and ranges from 50 to 400 m2 g−1 (Cornell R.M. and Schwertmann U., 2003; Waychunas G.A. et al., 2005; Qi P. and Pichler T., 2014); the value for our product is at the low end of this range. The pore size distribution of the particles is presented in Fig. 8b. Pore diameters vary between 3.3 and 20 nm with a mean diameter equal to 6 nm. Moreover, the pore volume is equal to 0.03 cm3 g−1, which corresponds to a low internal porosity of 0.1. These results suggest that the particles of the synthesized ferrihydrite are mesoporous.
Final material
3.3.1 Composition of the product
Fig. 3a shows not only the FTIR spectra of the precursor and the intermediate product, but also the spectrum of the final product, while its Raman spectrum is presented in Fig. 3b. These two spectra are those of hematite (α-Fe2O3). In addition, the final product was analyzed by X-ray diffraction (Fig. 3c). All peaks are indexed to a rhombohedral crystalline phase of hematite (JCPDS card 33-0664). The average crystal size of the hematite nanoparticles was calculated from the dominant (104) peak at 2θ equal to 33.2°, using the Debye-Scherrer equation, and is equal to 47 nm. Zhu M. et al. (2012) synthesized hematite nanoparticles having a mean crystallite size of 51.4 nm by a hydrothermal method. As for Gurmen S. and Ebin B. (2010), they applied ultrasonic spray pyrolysis for the production of hematite particles. The crystallite size of their nanoparticles was between 18 and 33 nm, depending on the operating conditions.
Structure and morphology
The particles recovered at the process outlet were analyzed with a laser granulometer. Fig. 9 shows a narrow particle size distribution, between 200 nm and 2.1 μm, with a mean diameter (d50) of 0.5 μm. Chin S.M. et al. (2014) produced hematite particles through an autocombustion synthesis. The size distribution of their particles was multimodal, ranging between 0.6 and 450 μm. The particles were made up of large agglomerates of nanoparticles with a size from 60 to 140 nm. These authors indicate that drying and dry grinding do not allow individualizing the nanoparticles. Ozcelik B.K. and Ergun C. (2015), as noted in the Introduction, used spray pyrolysis. A SEM micrograph of particles recovered at the filter bottom is presented in Fig. 10a. They are made up of spherical nanoparticles, mostly individual or weakly agglomerated, since the agglomeration tendency under the effect of interparticle forces is low. The smaller elementary spherical particles are rather compact and their surface is smooth, while the larger ones are nanostructured. Concerning the small particles, one may suggest that they result from fine droplets spray-dried in the air before reaching the beads' surface. As for the nanostructured particles, the plastic character of the precursor during its transformation is at the origin of their formation.
The size distribution of the elementary particles was determined by SEM image analysis of about 1000 particles (Fig. 10b). Half of the elements have a size lower than 100 nm, which can be related to the average crystal size (47 nm) calculated from the X-ray diffraction data in Section 3.3.1. As for the elements having a size higher than 100 nm, one can consider that they correspond to the large nanostructured elements.
Surface properties
Helium pycnometry analysis was used to determine the real density of the synthesized powder, which is equal to 4499 kg m−3. BET results (Fig. 11) show that:
- The adsorption-desorption isotherm curves are of type III, which is characteristic of a crystalline product with a low specific surface area. This was determined from these data and is equal to 24 m2 g−1.
- The total pore volume is equal to 0.03 cm3 g−1.
- The pore diameters are between 1.8 and 10 nm with a mean pore diameter of 6.5 nm.
From these results, an internal porosity, χ, of the formulated product equal to 0.12 was calculated using the equation χ = Vpore/(Vpore + 1/ρ), where ρ is the real density of the Fe2O3 nanoparticles and Vpore the specific total pore volume.
Taking into account this porosity and assuming that the agglomerates consist of spherical nanoparticles, the mean diameter of the nanoparticles was calculated by d = 6(1 − χ)/(ρ·SBET). This mean diameter is estimated at 47 nm and is similar to the crystal size determined in Section 3.3.1.
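Both the porosity and the diameter estimate can be reproduced from the three measured quantities. A sketch under the assumptions above; the 6(1 − χ)/(ρS) relation is our reading of the calculation, not a formula quoted by the authors.

# Internal porosity and equivalent nanoparticle diameter from the BET data,
# assuming spherical crystallites as stated in the text.
rho = 4499.0          # real (skeleton) density, kg m^-3
v_pore = 0.03e-3      # specific pore volume, m^3 kg^-1 (0.03 cm^3/g)
s_bet = 24.0e3        # specific surface area, m^2 kg^-1 (24 m^2/g)

chi = v_pore / (v_pore + 1.0 / rho)          # pore volume / total volume
d = 6.0 * (1.0 - chi) / (rho * s_bet)        # assumed sphere relation
print(f"porosity = {chi:.2f}")               # ~0.12
print(f"d = {d * 1e9:.0f} nm")               # ~49 nm, close to the reported 47 nm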
These results suggest that the particles are agglomerates of crystallites. The low value of the internal porosity indicates that a particle can be considered as a consolidated granular medium whose pore network is the space between the crystallites. Indeed, the change in physical state and the mechanical and thermal stresses exerted by the fluidized bed on the precursor during its transformation may be at the origin of the formation of the spherical solid particles. Housaindokht M.R. and Pour A.N. (2011), applying a microemulsion method, obtained a mean pore diameter in the range of 7.1-29.1 nm and a total pore volume of between 0.16 and 0.21 cm3 g−1, depending on the operating parameters used in their study. Ahmmad B. et al. (2013), using a biosynthesis method, found a narrower pore size distribution, between 3 and 15 nm, with a total pore volume of 0.03 cm3 g−1.
In the dry fluidized bed process used in this study, significant forces are imposed, which lead to the synthesis of spherical solid particles. These particles are constituted of agglomerates of crystallites held together by bonding forces. These agglomerates are rigid and ordered under the effect of external forces.
Regarding the production efficiency, it is equal to the ratio between the mass of powder recovered at the filter bottom and the mass expected from conversion of the precursor injected into the process. For the operating conditions fixed in this study, it is about 75 %. Indeed, a small amount of powder remains in the fluidized bed (on the beads' surface), while most of the lost powder is stopped by the cyclone. This efficiency depends, on the one hand, on the surface properties of the glass beads and, on the other hand, on the fluidized bed movement, which is strongly impacted by the operating conditions. In particular, a higher fluidization velocity enhances the mixing of the beads in the fluidized bed, leading to a decrease of the solid film thickness around the beads and of the size of the fragments removed from the beads' surface. This will be analyzed in detail in a further study.
Reaction mechanisms
To our knowledge, no study in the literature addresses the reaction mechanism of the thermal decomposition of an iron nitrate solution. A few works based on thermogravimetric and differential thermal analyses were done on the mechanism of thermal decomposition of iron nitrate nonahydrate powder into hematite. However, the authors do not agree on the nature and the number of molecules removed during the first reactions, or on the intermediate material formed:
- Removal of 9 molecules of water to form anhydrous iron nitrate (Keely W.M. and Maynor H.W., 1963; Mu J. and Perlmutter D.D., 1982; Pereira Da Silva C. et al., 2018); 7 molecules of water (Gadalla A.M. and Yu H.F., 1990; Elmasry M.A.A. et al., 1998; Melnikov P. et al., 2014); or 6 molecules of water and 1 molecule of nitric acid (Erri P. et al., 2004; Wieczorek-Ciurowa K. and Kozak A.J., 1999; Tong G. et al., 2010);
- Intermediate material: Fe2O3·3H2O (Gadalla A.M. and Yu H.F., 1990), Fe(OH)3 (Elmasry M.A.A. et al., 1998) or FeOOH (Erri P. et al., 2004; Wieczorek-Ciurowa K. and Kozak A.J., 1999; Tong G. et al., 2010).
Other authors suggested ferrihydrite as the intermediate material but did not propose any reaction mechanism (Bødker F. et al., 2000; Oliveira A.C. et al., 2003; Rzepa G. et al., 2016). The reason may be that the ferrihydrite formula is not well defined.
The thermal decomposition of the solution of iron nitrate nonahydrate involves various reactions in four steps, as seen in Fig. 2. The study on the reaction mechanism is presented below.
The process configuration does not allow taking samples over time and at different temperatures within the solid generation reactor. In addition, setting the temperature of the fluidized bed at a value lower than 100 °C leads to a risk of caking of the bed due to a too slow drying of the solution and creation of solid bridges between the beads. Consequently, it was decided to carry out the study using the muffle furnace. This work was completed by experiments carried out in the lab convective dryer described in Section 2, in order to define more precisely the phenomena occurring at temperatures lower than 100 °C.
Results obtained with the muffle furnace
The 13 samples heated in the muffle furnace at different temperatures were all analyzed by FTIR spectroscopy. Some spectra are presented in Fig. 12. The product obtained after the third step and the final product were also analyzed by X-ray diffraction (no diagram presented) to control that they are ferrihydrite and hematite, respectively. Moreover, some photos of the samples are shown in Fig. 13.
Studying the first step of the mechanism, between ambient temperature and 80 °C, one can say from the first two photos and the first two FTIR spectra that the solution dries progressively as the temperature increases and its color changes. A solid film is formed at the sample surface. Since the two FTIR spectra are similar, no reaction occurs. Thus the mass loss of 54.4 wt.% of the initial solution sample is due only to the removal of water. This mass loss corresponds to the elimination of the water added to the precursor powder to make the solution, and of seven molecules of water of hydration of iron nitrate nonahydrate (dehydration). Consequently, after this step, the sample is made up of iron nitrate dihydrate.
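This attribution can be checked by a simple mass balance on the 66.7 wt.% solution; a short sketch follows.

# Mass-balance check of the first step: loss of the solution water plus
# 7 of the 9 hydration waters from Fe(NO3)3.9H2O.
M_H2O = 18.015
M_precursor = 403.95        # Fe(NO3)3.9H2O, g/mol

m_precursor = 2.0           # kg of precursor per...
m_water = 1.0               # ...kg of added water (66.7 wt% solution)
m_total = m_precursor + m_water

loss = m_water + m_precursor * 7 * M_H2O / M_precursor
print(f"predicted mass loss: {100 * loss / m_total:.1f} wt%")  # ~54 %, vs 54.4 % measured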
Concerning the second step of the mechanism between 80 and 130 °C, the samples become pasty and brown. On the FTIR spectra, the width of the nitrate peak at 1384 cm -1 is reduced, while three bands with a low intensity appear between 800 and 400 cm -1 . The calculated mass loss of 10.1 wt.% of the initial solution sample corresponds to the elimination of one molecule of nitrate.
As for the third step between 130 and 150 °C, with a mass loss of 19.3 wt.% of the initial solution sample, the nitrate peak on the FTIR spectrum becomes narrower, indicating that nitrate is still eliminated, and the bands between 800 and 400 cm -1 continue to evolve, to come close to those of ferrihydrite. The samples are dark brown and under a powder form.
Finally, the fourth step ranges between 150 and 500 °C, with a mass loss of 3.2 wt.% of the initial solution sample. The powder color changes from dark brown to red. The nitrate peak has totally disappeared and the bands between 800 and 400 cm -1 have significantly changed to match those of hematite. Thus the residual nitrate is removed, as well as the two last molecules of water, called hydroxylation water. During this step, ferrihydrite is transformed into hematite.
Results obtained with the convective dryer
During the first step in the muffle furnace, more than half of the solution mass is lost. In order to understand how the molecules of water are removed, the lab convective dryer described in Section 2 was used to dry the solution between 40 and 80 °C.
Water drying
A first series of experiments was performed with distilled water as a reference. Fig. 14 shows the variation with time of the water mass, the temperature of the water surface and the drying rate for an air temperature fixed at 80 °C. Three zones are observed in the figure:
- Prior to 380 seconds, the water sample is heated by hot air and its temperature increases rapidly. This leads to an increase in equilibrium vapor pressure (i.e. saturating vapor pressure). This first zone is called the initiation period.
- Between 380 and 3000 seconds, the water temperature is constant and equal to 48.8 °C, the mass decreases linearly and the drying rate is constant. Water is evaporated, and the water molecules have a low binding energy. The evaporation enthalpy of water is supplied by the hot air. The partial pressure of water vapor at the surface of the wet cotton (equilibrium pressure) is that of the saturating vapor of water at the sample temperature. This second zone is called the drying period at constant rate.
- Above 3000 seconds, when almost all the weakly bound water is eliminated, the cotton temperature increases rapidly to reach the set point value fixed at 80 °C. An increase in binding forces between the last water molecules and the cotton, as well as a combination of convective transport and resistance to water transfer from the cotton interior to the surface, lead to a decrease of the water partial pressure at the surface (equilibrium pressure), and consequently to a slowdown of the evaporation rate. The supplied heat is higher than that necessary for evaporation, and thus the temperature of the cotton surface increases to reach a new equilibrium state at which the cotton temperature becomes equal to that of the hot air. This third zone is called the drying period at decreasing rate.
For other experiments on water drying, the air temperature was fixed at 40 and 53 °C. Similar curves were obtained, with a temperature of the cotton surface in the second zone of 27.8 and 34.8 °C respectively, i.e. the sample temperature increases with air temperature. Furthermore, when increasing this parameter, the drying rate is increased and the evaporation time is reduced.
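The drying rate plotted in Fig. 14 is essentially the numerical derivative of the logged mass signal. A sketch of that computation follows, with made-up mass readings in place of the recorded data.

import numpy as np

# Drying rate from a logged mass-vs-time signal; np.gradient handles the
# non-uniform sampling. The arrays below are hypothetical placeholders.
t = np.array([0, 300, 600, 1200, 1800, 2400, 3000, 3600], float)  # s
m = np.array([7.00, 6.85, 6.55, 5.95, 5.35, 4.75, 4.18, 3.95])    # g

rate = -np.gradient(m, t) * 1000.0   # drying rate, mg s^-1
for ti, ri in zip(t, rate):
    print(f"t = {ti:5.0f} s  rate = {ri:.2f} mg/s")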
Solution drying
The precursor solution was then dried at different air temperatures in the convective dryer. Fig. 15 shows the influence of this parameter on the mass and the temperature of the sample surface during drying.
Different points are added on the curves, defining the mass loss corresponding to the removal of the solution water, of the solution water + 2 molecules of water of hydration in iron nitrate nonahydrate, of solution water + 4 molecules of water of hydration, and of solution water + 7 molecules of water of hydration. This last point is present only for the two highest temperatures since it was not possible to reach this level at 40 and 53 °C.
Based on these results and on the evolution of the drying rates (curves not presented), it was possible to delimit 5 zones (Fig. 15b) whose time ranges decrease when the air temperature increases. Zone 1 is characterized by a rapid increase of the sample temperature up to a value depending on the air temperature, as well as a rapid increase of the evaporation rate. The mass loss is around 3 wt.% of the initial solution mass, whatever the air temperature. As for water drying, this zone is an initiation period.
Zone 2 is still characterized by an increase of the sample temperature, but this increase is more moderate than in zone 1. Moreover, the evaporation rate is constant. In this zone, called drying period at constant rate, the water removal leads to an increase of iron nitrate concentration in the sample, and consequently to an increase of the surface temperature. The mass loss during this period is around 25 wt.% of the initial solution mass for all the air temperatures. After the two first periods, the concentration of iron nitrate in the solution had increased from 0.67 to 1.27 g per g of water. This concentration corresponds to the elimination of 85 wt.% of the added water of the solution. At 40 °C, it is equal to 73 % of the saturation concentration of iron nitrate.
In zone 3, the increase in the surface temperature is slightly higher than in zone 2, while the evaporation rate decreases. During this period of drying at decreasing rate, the mass loss is about 11 wt.% of the initial solution mass for all the air temperatures. At the end, all the water used to form the solution is evaporated, as well as 2 molecules of water of hydration, and the concentration of iron nitrate in the solution still increases. For example, it reaches 1.1 times the saturation concentration at 40 °C. The chemical formula of the product obtained at this stage is Fe(NO 3 ) 3 ·7H 2 O. The decrease in the evaporation rate may be explained by the fact that the high iron nitrate concentration may modify the medium consistency and thus the resistance of water transfer from the medium inside to its surface. The product becomes viscous, and one can even suggest that at low temperatures, a crystallization/precipitation phenomenon begins since the iron nitrate concentration is higher than the saturation concentration.
In zone 4, the sample temperature rises very slowly to reach the set point. The mass loss in this zone is about 15 wt.% and corresponds to the elimination of the 5 other molecules of water of hydration, leading to iron nitrate dihydrate, Fe(NO 3 ) 3 ·2H 2 O. The evaporation rate decreases compared to the previous steps, due to a higher viscosity of the medium and the probable presence of solid particles that slow down water migration in the medium. This corroborates the observation of the solid film observed on the sample surface treated at 80 °C in the muffle furnace. The elimination of these 5 molecules of water is thus a slow phenomenon.
Finally, the temperature in zone 5 is constant while the mass still decreases, due to the onset of the elimination of HNO3 already discussed when analyzing the results of the muffle furnace.
It can be observed in Fig. 15 that the different phenomena described above are accelerated by an increase in temperature. Indeed, the heat supplied to the sample increases, and so does the saturating vapor pressure of water. Thus, the drying kinetics is enhanced, and the time ranges of the various zones are reduced.
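To attach rough numbers to this effect, the saturation vapor pressure of water can be estimated with the Antoine equation. The sketch below is illustrative only: the Antoine constants are standard literature values for water (not taken from the paper), 40 and 53 °C are the two lowest air temperatures mentioned above, and 70 and 80 °C are hypothetical placeholders for the two highest, unspecified air temperatures.

```python
# Antoine equation for water, log10(P) = A - B / (C + T),
# with P in mmHg and T in degrees Celsius (valid roughly 1-100 C).
# Constants are standard literature values, not from this study.
A, B, C = 8.07131, 1730.63, 233.426

def p_sat_mmHg(t_celsius):
    """Saturation vapor pressure of water from the Antoine equation."""
    return 10 ** (A - B / (C + t_celsius))

# 40 and 53 C are from the study; 70 and 80 C are illustrative
# placeholders for the two highest (unspecified) air temperatures.
for t in (40, 53, 70, 80):
    print(f"{t} C: p_sat = {p_sat_mmHg(t):6.1f} mmHg")
```

Between 40 and 80 °C the driving force for evaporation rises roughly six-fold, consistent with the shortened zone durations observed at higher air temperatures.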
The total mass lost during the 4 periods of water elimination in the convective dryer is 54 wt.% of the initial solution mass. It is similar to the mass lost during the first step of the mechanism in the muffle furnace (54.4 wt.%).
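This mass balance can be cross-checked against the stoichiometry of the nonahydrate. The following is a minimal sketch, assuming that the initial concentration of 0.67 g of iron nitrate per gram of water quoted above refers to anhydrous Fe(NO3)3 per gram of total water; this reading is our assumption, not an explicit statement of the text.

```python
# Cross-check of the dehydration milestones from stoichiometry.
# Assumption (see lead-in): 0.67 g of anhydrous Fe(NO3)3 per g of water.
M_FE_NITRATE = 241.86   # molar mass of anhydrous Fe(NO3)3, g/mol
M_WATER = 18.015        # molar mass of H2O, g/mol

conc = 0.67                      # g Fe(NO3)3 per g of water
m_salt = conc / (1.0 + conc)     # mass fractions per 1 g of solution
m_water = 1.0 - m_salt
mol_fe = m_salt / M_FE_NITRATE

# Water of hydration carried by the nonahydrate precursor (9 H2O per Fe)
m_hydration = 9 * M_WATER * mol_fe
m_added = m_water - m_hydration  # "solution water" added to dissolve it

milestones = [(0, "solution water"), (2, "+2 H2O of hydration"),
              (4, "+4 H2O of hydration"), (7, "+7 H2O of hydration")]
for n, label in milestones:
    loss = m_added + n * M_WATER * mol_fe
    print(f"{label:20s}: {100 * loss:5.1f} wt% of initial solution mass")
```

Under this assumption the last milestone comes out at about 54 wt%, in line with the totals reported for both the convective dryer and the muffle furnace (54.4 wt%).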
Conclusions
Hematite nanoparticles were synthesized in a compact and simple process in which a solution of iron nitrate nonahydrate was pulverized, dried and transformed in a fluidized bed reactor containing glass beads to produce particles that were then subjected to calcination in an entrained bed reactor. Different analytical techniques made it possible to characterize the produced materials, to determine their properties and to propose a reaction mechanism.
An intermediate product called ferrihydrite is produced in the fluidized bed reactor. The product leaves the reactor in two forms. On the one hand, a solid film coating the beads cracks during drying and is fragmented by the high stresses caused by the movement of the fluidized bed, generating micron-sized particles that are retained by the cyclone. On the other hand, nanoparticles are removed from the glass beads' surface and entrained into the calcination reactor, where they are transformed into hematite (α-Fe2O3). These particles are individual or weakly agglomerated. Their size distribution is narrow and monomodal (between 0.2 and 2.1 μm). The particles produced are small agglomerates of crystallites whose size is about 47 nm. They can be considered a consolidated porous granular medium because of their low internal porosity (0.12).
The heat treatment of the solution of iron nitrate nonahydrate in a muffle furnace and in a lab convective dryer has allowed the proposal of a reaction mechanism defined in different steps. First, the water added to make the solution is lost, as well as two molecules of water initially contained in the precursor. This elimination is rapid and leads to Fe(NO3)3·7H2O. Then, five other molecules of precursor water are removed slowly to form iron nitrate dihydrate. These two steps occur below 80 °C. Between 100 and 200 °C, iron nitrate dihydrate is decomposed into ferrihydrite with the elimination of nitric acid and nitrate. These different steps take place in the fluidized bed reactor, from which ferrihydrite particles are entrained by the fluidizing air. Finally, above 200 °C, ferrihydrite is transformed into hematite in the calcination reactor, with removal of the residual nitrate and of hydroxylation water. | 9,608.2 | 2020-01-10T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
Removal of Lead Ion from Industrial Effluent Using Plantain (Musa paradisiaca) Wastes
Introduction: Industrial effluent often contains heavy metals which bio-accumulate in biological systems and persist in the environment, thereby constituting public health problems. Plantain (Musa paradisiaca) wastes, which are easily available, could be used to produce resource materials such as activated carbon that are of economic importance. Aims: This study assessed the use of plantain wastes for the removal of lead in effluent from a battery recycling plant. Methodology: Plantain wastes were collected from a plantation, sun-dried and ground. These were then carbonized and activated using an industrial oven at 400 °C. A lead-acid battery recycling plant in Ogunpa, Ibadan, Nigeria was purposively selected. Samples of effluent from the point of discharge into Ogunpa River (100 m from the residential area) were subjected to physico-chemical (pH, conductivity, Total Suspended Solids (TSS), and Lead (Pb)) analyses, using American Public Health Association methods. A batch experiment was adopted in determining the adsorption isotherms of the adsorbents, using the Association of Official Analytical Chemists method at varied pH (2 to 12) and adsorbent doses (0.1 to 2.0 g), with treatments of Plantain Peel Activated Carbon (PAC), Plantain Leaf Activated Carbon (LAC), Plantain Bract Activated Carbon (BAC) and Plantain Stalk Activated Carbon (SAC), while Commercial Activated Carbon (CAC) served as control. Initial and final concentrations of Pb were determined by Atomic Absorption Spectroscopy. Results: Means of pH, conductivity, TSS, and Pb of the effluent sample were 2.0±0.2, 2164.7±0.6 μS/cm, 2001.7±25.2 mg/l, and 31.3±0.0 mg/l, respectively. The highest quantities (94.97%) of Pb were removed at pH 10. Optimum removal of Pb occurred at an adsorbent dose of 1.5 g. Conclusion: Carbonized and activated plantain wastes used as adsorbents showed potential for effectively removing lead from effluent generated from a battery recycling plant. Treatment of effluent with plantain wastes should be encouraged in battery recycling plants for improved public health and safety status, and to enhance effective waste management.
INTRODUCTION
Over two-thirds of Earth's surface is covered by water; less than a third is taken up by land. When Earth's population was much smaller, no one believed pollution would ever present a serious problem. As the population continues to grow, people are putting ever-increasing pressure on the planet's water resources [1,2,3]. As industrialization has spread around the globe, the problem of water pollution has spread with it. It was once popularly believed that the oceans were far too big to pollute. Today, with over 7 billion people on the planet, it has become apparent that there are limits, and water pollution is one of the signs that humans have exceeded those limits. Water pollution has always been a major problem for the environment.
Water pollution is the contamination of streams, lakes, underground water, bays or oceans by substances harmful to living things. Water pollutants may occur naturally or through anthropogenic means. A variety of industries are responsible for the discharge of heavy metals into the environment through their wastewater [4]. Some of these water pollutants, specifically heavy metals, have serious health implications because of their persistence and bioaccumulation potential in the environment. For example, lead is a metal ion toxic to the human biosystem and is among the common global pollutants arising from increasing industrialization. The assimilation of relatively small amounts of lead over a long period of time in the human body can lead to the malfunctioning of organs and chronic toxicity [5].
Given the increasing rate at which heavy metals are generated and discharged into water bodies through various industrial applications, as well as the known deleterious effects of these heavy metals, there is a need to proffer a cost-effective solution for heavy metal removal from polluted water. Over the years, a wide range of techniques for removing heavy metals from polluted water have been developed, including precipitation, ion exchange, adsorption, electro-dialysis and filtration. Studies on the treatment of effluent bearing heavy metals have revealed adsorption to be a highly effective technique for the removal of heavy metals from waste streams, and activated carbon has been widely used as an adsorbent [6].
Despite its extensive use in the water and wastewater treatment industries, activated carbon remains an expensive material. The quest for cost effectiveness has therefore formed the basis for the use of low-cost agricultural wastes, which are produced in excessively large quantities, in heavy metal adsorption from effluent, and there is an urgent need to explore such agro-based inexpensive adsorbents and their feasibility for the removal of heavy metals. Plantain wastes, which are easily available and often constitute a nuisance to the environment, could be used to produce resource materials such as activated carbon, commonly used as a filter medium in water treatment systems. Therefore, this study assessed the use of plantain wastes in the removal of lead ions from effluent from battery recycling plants.
Plantain Wastes
Ripe plantain and fruit stalks were collected from markets within Ibadan in Oyo State, while plantain bracts and leaves were collected from a plantation in Osun State, Nigeria. The peel and fruit stalk were removed, washed with distilled water, sun-dried for 168 hours and then oven-dried at 45ºC to constant weight. The samples were ground, passed through a 0.14 mm mesh and stored in polythene containers for analysis. Four types of adsorbents, Plantain Peel Activated Carbon (PAC), Plantain Leaf Activated Carbon (LAC), Plantain Bract Activated Carbon (BAC) and Plantain Stalk Activated Carbon (SAC), together with Commercial Activated Carbon (CAC) as control, were used as the media (adsorbents) to remove lead ions from the wastewater.
Effluent Collection
Effluent was collected into a 5-litre plastic bottle at the point of discharge into Ogunpa River from the lead-acid battery recycling plant in Ogunpa, Ibadan North-West Local Government, Oyo State. Containers used for sample collection were pre-treated by washing with dilute hydrochloric acid and then rinsed with distilled water. The containers were subsequently dried in an oven for one hour at 105ºC and allowed to cool to ambient temperature. At the collection point, the containers were rinsed with the sample 3 times, then filled with sample, corked tightly, and taken to the laboratory for treatment and analysis. The method of analysis was consistent with the standard methods [7].
The pH of the sample was taken at the site and other parameters were measured in the laboratory. Samples were stored at below 4ºC to avoid any change in physico-chemical characteristics.
An atomic absorption spectrophotometer (Philips PU 9100X) with a hollow cathode lamp and a fuel-rich flame (air-acetylene) was used. Each sample was aspirated and the mean signal response recorded at the metal ion's wavelength. This was used to compute the concentrations of metal ions adsorbed by the adsorbents.
Carbonization and Activation of Plantain Wastes
The samples were carbonized and activated by a two-step method. A quantity (100 g) of each raw, ground plantain waste sample was carbonized by heating at 400ºC for 1 h under a closed system in a porcelain crucible and then cooled to room temperature. The charcoal was subjected to H3PO4 activation: it was agitated in H3PO4, and after agitation the pre-carbonized charcoal slurry was left overnight at room temperature and then dried at 110ºC for 24 hours. The samples were activated in a closed system by heating to the optimized temperature of 400ºC and holding at this constant temperature for 1 hour before cooling. After cooling, the activated charcoal was washed successively several times with distilled water to remove the excess activating agent and impurities. The prepared activated carbons were then subjected to various analyses.
Commercial activated carbon
The commercial carbon, Calgon carbon (F-300), was obtained from Calgon Carbon Inc., Pittsburgh, PA, USA.
Effect of pH
A volume of 50 cm3 of effluent was measured into each 250 ml conical flask at adjusted pH values of 2, 4, 6, 8, 10 and 12; the desired pH was adjusted using concentrated NaOH. Activated carbon (1.0 g) was added to each flask, and the mixture was shaken thoroughly at 200 rpm with an electric shaker for 90 minutes to attain equilibrium. The suspension was filtered using Whatman No. 1 filter paper to remove suspended adsorbent. Initial and final lead ion concentrations were analyzed using an Atomic Absorption Spectrophotometer, AAS (Computer-Aided Scalar Series, Model 969), while the amount adsorbed was calculated by difference.
Effect of contact time
The procedure was repeated to determine the optimum contact time: 50 ml of effluent was measured into each 250 cm3 conical flask at the optimum pH (obtained from the experiment above). Activated carbon (1.0 g) was added to each flask, and the mixture was shaken thoroughly at 200 rpm with an electric shaker for 30, 60, 90, 120 or 150 minutes to attain equilibrium. The suspension was filtered using Whatman No. 1 filter paper to remove suspended adsorbent. Initial and final lead ion concentrations were analyzed by AAS, while the amount adsorbed was calculated by difference.
Effect of adsorbent doses
The procedure was repeated to determine the optimum adsorbent dose: 50 ml of effluent was measured into each 250 cm3 conical flask at the optimum pH. A known amount of activated carbon (0.1, 0.5, 1.0, 1.5 or 2.0 g) was added to each flask, and the mixture was shaken thoroughly at 200 rpm with an electric shaker for the optimum contact time (90 minutes) to attain equilibrium. The suspension was filtered through Whatman No. 1 filter paper to remove any suspended adsorbent. Initial and final lead ion concentrations were determined by AAS, while the amount adsorbed was calculated by difference.
Effect of initial metal ion concentration
A stock solution of 1000 mg/L of standardized Pb2+ was prepared from the chloride salt using the effluent sample. The solution was adjusted to pH 6.0±0.2 with 0.1 M HCl. Batch sorption experiments were performed in which 50 ml of effluent was measured into each 250 ml conical flask, 1.0 g of the adsorbent was added, and the mixture was shaken thoroughly at 200 rpm with an electric shaker for 90 minutes to attain equilibrium. The suspension was filtered through Whatman No. 1 filter paper to remove any suspended adsorbent. Initial and final lead ion concentrations were determined by AAS, while the amount adsorbed was calculated by difference.
A summary index was used to determine the mean values of the results obtained for the various parameters. One-way ANOVA was used to test for significant differences in the characteristics of the prepared plantain wastes. The results were compared using the least significant difference test at a 95% confidence level in SPSS 16.
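For illustration, an equivalent one-way ANOVA can be sketched outside SPSS with SciPy; the replicate values below are hypothetical placeholders, not data from this study.

```python
from scipy import stats

# One-way ANOVA across the four prepared adsorbents (hypothetical
# iodine-number replicates; the study's raw data are not reproduced here).
pac = [720, 735, 728]   # Plantain Peel Activated Carbon
lac = [690, 701, 695]   # Plantain Leaf Activated Carbon
bac = [655, 662, 650]   # Plantain Bract Activated Carbon
sac = [705, 698, 710]   # Plantain Stalk Activated Carbon

f_stat, p_value = stats.f_oneway(pac, lac, bac, sac)
# Differences are significant at the 95% confidence level if p < 0.05.
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```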
Adsorption Isotherms
A series of batch experiments was carried out to determine the adsorption isotherms of lead ion on the adsorbents [6].
The amount of metal ion adsorbed (Qe) during the series of batch investigations was determined using a mass balance equation:

Qe = (Cv - Cf) V / M

where Qe is the metal uptake (mg/g); Cv and Cf are the initial and final metal equilibrium concentrations in the effluent sample (mg/l), respectively; M is the mass of the adsorbent (g); and V is the volume of the effluent sample (l).
Removal efficiency is defined as follows:

Removal efficiency (%) = (Cv - Cf) / Cv × 100 [8]

where Cv and Cf are the initial and final metal equilibrium concentrations in the effluent sample (mg/l), respectively. High or low pH values in a river have been reported to affect aquatic life and alter the toxicity of other pollutants in one form or another [9]. The mean pH value of the effluent indicated that it was far more acidic than the NESREA recommended limits of pH 6-9 for battery factory effluents, owing mainly to sulphuric acid, one of the major raw materials in lead-acid battery manufacture.
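These two definitions are straightforward to encode. The sketch below uses the study's batch conditions (50 ml of effluent, 1.0 g of adsorbent, initial Pb of 31.3 mg/l); the final concentration is a hypothetical value chosen only to illustrate the calculation.

```python
def metal_uptake(c_v, c_f, volume_l, mass_g):
    """Metal uptake Qe (mg/g) from the mass balance equation above."""
    return (c_v - c_f) * volume_l / mass_g

def removal_efficiency(c_v, c_f):
    """Percentage of metal removed from solution."""
    return (c_v - c_f) / c_v * 100.0

# Study conditions: 50 ml (0.050 l) of effluent, 1.0 g of adsorbent,
# initial Pb of 31.3 mg/l; c_f is a hypothetical final concentration.
c_v, c_f = 31.3, 1.6
print(f"Qe      = {metal_uptake(c_v, c_f, 0.050, 1.0):.3f} mg/g")
print(f"Removal = {removal_efficiency(c_v, c_f):.1f} %")
```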
Physico-chemical Characteristics of the Effluent
Temperature is important chiefly for its effect on other properties of wastewater. The mean temperature of the effluent is within the NESREA limit of 30ºC. The average Total Suspended Solids (TSS) value of the effluent is very high. Conductivity of water is a measure of the total concentration of ionic species, or salt content, and the mean conductivity value of the effluent is very high. The high conductivity (i.e., high salt concentration) of the effluent can increase the salinity of the receiving river, which may result in adverse ecological effects on the aquatic biota; such high salt concentrations also pose a potential health hazard [10].
Lead is a suspected pollutant in battery recycling effluent because it is a major raw material in the manufacture of lead-acid accumulator batteries. Lead at very low concentrations is toxic and hazardous to most forms of life. The chronic effects of Pb on man include neurological disorders, especially in the foetus and in children, which can lead to behavioural changes and impaired performance in IQ tests [11,12]. The recommended Pb limit was exceeded in the effluent studied; the direct use of water from the receiving river (Ogunpa) for domestic purposes without treatment could be detrimental to young children in the vicinity of the catchment (Table 1). The receiving river would also not be suitable for the maintenance of the aquatic ecosystem.
Characteristics of the Adsorbents
Bulk density is an important physical parameter because it determines the mass of carbon that can be contained in a filter of given solids capacity and the amount of treated liquid that can be retained by the filter cake [13]. Activated carbon with high bulk density has the ability to filter more liquor volume before the available cake space is filled. The prepared adsorbents equally had low ash contents (Table 2). Ash content reduces the overall activity of activated carbon and the efficiency of reactivation; the lower the ash value, therefore, the better the activated carbon performs as an adsorbent.
Surface Chemistry of the Adsorbents
The iodine number is a fundamental parameter used to characterize activated carbon performance by measuring surface area. It is a measure of the micro-pore content of the activated carbon and is obtained by the adsorption of iodine from solution by the activated carbon sample. The micro-pores are responsible for the large surface area of activated carbon particles and are created during the activation process. The surface areas of the adsorbents are higher than that of the commercial activated carbon (Table 2). The surface titration method stipulates that only strongly acidic carboxylic groups are neutralized by sodium bicarbonate (NaHCO3), whereas those neutralized by sodium carbonate (Na2CO3) are thought to be lactonic and carboxylic groups. The weakly acidic phenolic groups react only with a strong alkali, sodium hydroxide (NaOH). Neutralization with hydrochloric acid (HCl) characterizes the amount of basic surface groups (pyrones and chromenes) present in the activated carbon [14,15]. The surface chemistry values of the adsorbents are almost the same as those of the commercial activated carbon (Table 2).
Operating Conditions
These are conditions necessary for the removal of heavy metals from the effluent. They include pH, contact time, adsorbent dose, and initial ion metals concentration. The effect of these conditions on the removal of heavy metals was studied and recorded.
Effect of pH
The pH of the aqueous solution is an important controlling parameter in the adsorption process, and thus the role of H+ concentration was examined using samples at different pH values covering the range 2-12. Fig. 1a shows that the percentage removal of Pb (II) increased at a steady rate as pH increased up to 10, attaining a maximum value of around 95.58%. Likewise, the adsorption capacity (Qe) of Pb (II) increased at a steady rate as pH increased up to 10, attaining a maximum value of 1.98 mg/g (Fig. 1b).
It should be noted that beyond pH 10 there was a decrease in adsorption, which may be due to the formation of soluble hydroxyl complexes. At low pH values, the surface of the adsorbent is closely associated with hydroxonium ions (H3O+), which repel metal cations from the surface functional groups and consequently decrease the percentage removal of metal [16]. As the solution pH increases, metal hydrolysis and precipitation set in; the onset of adsorption therefore occurs before the beginning of hydrolysis [8]. When the pH of the adsorbing medium was increased from 2 to 10, there was a corresponding increase in deprotonation of the adsorbent surface, leading to a decrease in H+ ions on the adsorbent surface. This creates more negative charges on the adsorbent surface, which favours the adsorption of positively charged species [5].
Hydrolysis of cations proceeds by the replacement of metal ligands in the inner coordination sphere with hydroxyl groups; this replacement occurs after the removal of the outer hydration sphere of the metal cations. In addition, increasing pH decreases the concentration of H+, thereby reducing the competition between metal ions and protons for adsorption sites on the particle surface. Another factor that enhances metal ion adsorption at increasing pH is the precipitation of metal ions from solution in the form of hydroxides. Overall, the adsorption of the metal ions onto the plantain waste adsorbents was largely influenced by pH.
Effect of contact time
The results showed that, for all adsorbents, the removal rate was rapid within the first 30 minutes, increased sharply up to 60 minutes and increased gradually between 90 and 150 minutes for Pb (II). Fig. 2a shows that the percentage removal of Pb (II) increased at a steady rate as contact time increased up to 150 minutes, attaining a maximum value of around 95.78%. The adsorption capacity (qe) of Pb (II) also increased steadily with contact time up to 150 minutes, attaining a maximum value of 1.50 mg/g (Fig. 2b).
The initial faster rate was due to the availability of the uncovered surface area of the adsorbents, since the adsorption kinetics depend on the surface area of the adsorbents. Lead adsorption takes place first at the more reactive sites. As these sites are progressively filled, sorption becomes more difficult and increasingly unfavourable [7].
Effect of adsorbent dose
The values were generated by varying the adsorbent doses (0.1 to 2.0 g) at room temperature with the different adsorbents. The results suggested that beyond a certain adsorbent dose, maximum adsorption sets in: the amount of ions bound to the adsorbent and the amount of free ions remain constant even with further addition of adsorbent.
Fig. 3a shows that the percentage removal of Pb (II) increased at a steady rate as the adsorbent dose increased up to about 1.5 g, attaining a maximum value of 96.80%. The adsorption capacity (qe) of Pb (II) decreased steadily as the adsorbent dose increased up to 1.5 g, reaching a minimum value of 0.60 mg/g (Fig. 3b). The increased percentage adsorption was a result of the increased surface area and additional adsorption sites provided by the higher adsorbent dose [17]. The observed decrease in adsorption capacity may be due to a decrease in the liquid-solid ratio: the amount adsorbed, qe, is inversely proportional to the mass of biosorbent but directly proportional to the percentage adsorbed [18].
Effect of initial metal ion concentration
The results showed that the percentage removal of Pb (II) increased at a steady rate as the initial concentration increased up to 500 mg/L, attaining a maximum value of 99.21%. The adsorption capacity (qe) of Pb (II) likewise increased steadily with initial concentration up to 500 mg/L, attaining a maximum value of 24.80 mg/g. Details are shown in Figs. 4a and 4b.
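Equilibrium data of this kind are commonly summarized by fitting an isotherm model. The sketch below fits a Langmuir isotherm with SciPy; the (Ce, qe) points are hypothetical placeholders, and the fitted parameters are not results of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Langmuir isotherm: qe = q_max * K_L * Ce / (1 + K_L * Ce)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# Hypothetical equilibrium data (Ce in mg/l, qe in mg/g)
c_e = np.array([0.5, 1.2, 2.8, 5.5, 9.0])
q_e = np.array([1.7, 3.8, 7.4, 11.3, 14.4])

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=(20.0, 0.1))
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} l/mg")
```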
CONCLUSION
The study confirmed that the discharge of untreated effluent from a battery recycling plant, containing heavy metals and hazardous materials such as lead, over a long period of time into the receiving water body is a potential source of contamination. The study revealed that the background level of lead in the effluent was 31.25 mg/l, which was above the recommended limits of NESREA. Plantain wastes have been successfully used to produce high-quality activated carbon. From the analysis of the results, it is evident that plantain wastes processed into activated carbon have great potential for the uptake of lead from industrial effluent. It has also been observed that the sorption capacity of activated carbon made from plantain wastes depends on conditions such as pH, initial metal ion concentration, adsorbent dose and contact time. Plantain wastes, which constitute a nuisance to the environment, could be used to produce resource materials (activated carbon) that can remove heavy metals of public health concern from industrial effluent. Converting these wastes into activated carbon will greatly help to reduce their menace in the environment and enhance effective waste management. | 4,943.4 | 2015-01-10T00:00:00.000 | [
"Engineering"
] |
Development of a deformable lung phantom for the evaluation of deformable registration
A deformable lung phantom was developed to simulate patient breathing motion and to evaluate a deformable image registration algorithm. The phantom consisted of an acryl cylinder filled with water and a latex balloon located in the inner space of the cylinder. A silicon membrane was attached to the inferior end of the phantom. The silicon membrane was designed to simulate a real lung diaphragm and to reduce motor workload. This specific design was able to reduce the use of metal, which may prevent infrared sensing of the real-time position management (RPM) gating system for four‐dimensional (4D) CT image acquisition. Verification of intensity-based three‐dimensional (3D) demons deformable registration was based on the peak exhale and peak inhale breathing phases. The registration differences ranged from 0.60 mm to 1.11 mm, and accuracy was determined according to inner target deformation. The phantom was able to simulate features and deformation of a real human lung and has the potential for wide application in 4D radiation treatment planning. PACS number: 87.57.Gg
I. INTRODUCTION
Adaptive radiation therapy (ART) is a replanning procedure that conforms to organ changes that occur due to breathing, weight loss, or tumor shrinkage during the course of treatment. Various techniques have contributed to the success of ART including deformable registration, automatic segmentation, and dose accumulation. (1)(2) However, verification of these techniques is a difficult task for the quantification of anatomic variation and requires the use of a competent deformable phantom.
Several investigators have sought to develop a reproducible, deformable, tissue-equivalent phantom to perform accurate assessment of respiratory-correlated movements. We have previously fabricated a phantom to simulate organ motion using simple moving targets (3)(4) and an inflatable balloon. (5) However, for an accurate clinical application, a phantom is needed to represent real organ structures and densities.
Nioutsikou et al. (6) have developed a sophisticated deformable phantom for lung tumor dosimetry. An accordion-type flexible bottle that simulated a lung was described, into which dosimetric film was inserted. Kashani et al. (7) have designed a tissue-equivalent deformable lung phantom using a commercial diagnostic thoracic phantom as the main shell and high-density foam with tumor-simulating inserts as the inner lung material. The investigators used this phantom to verify the use of the thin-plate spline deformable registration technique. (8) However, this phantom was insufficient to simulate real lung and tumor deformation and, consequently, had a limited ability to provide accurate deformable registration. Serban et al. (9) have developed an anthropomorphic lung phantom that consisted of a latex balloon with dampened sponges and a piston that simulated the thoracic cavity and diaphragm. The balloon and the water in the space around it were compressed and decompressed by a piston fastened to a programmable motor. This tissue-equivalent phantom simulated real lung deformation according to breathing patterns. However, a certain amount of metal was used to support the phantom and to exert a large force on the piston without water leakage. The metal may cause metal artifacts on CT scans and may prevent infrared sensing of the real-time position management (RPM) gating system for four-dimensional (4D) CT image acquisition.
We have also developed an anthropomorphic deformable lung phantom by the use of a latex balloon containing various elastic materials and a silicon membrane. The elastic materials were used as landmarks for deformable registration. The silicon membrane that simulates the diaphragm serves as a medium to improve motor efficiency and minimize metal use.
The demons deformable registration algorithm has been used to register lung CT scans. Guerrero et al. (10) have applied a three-dimensional (3D) optical flow method to validate intrathoracic tumor motion estimation. Wang et al. (11) have used an accelerated demons algorithm and have evaluated the algorithm for prostate, head-and-neck, and lung cases. Wu et al. (12) reported an average alignment error reduction of 2.07 mm using the demons algorithm with boundary-matching penalty on 4D CT data sets. In this study, validation of demons deformable registration was performed based on distance and vector differences between 3D reference and deformed image sets. We have used a phantom for verification of the demons deformable registration algorithm.
A. Experimental setup
The anthropomorphic lung phantom shown in Fig. 1(a) was designed to simulate real lung deformation. The phantom consisted of an acryl cylinder filled with water. A latex balloon was located in the inner space of the cylinder. The diameter and length of the acryl cylinder were 22 cm and 24 cm, respectively. The latex balloon simulated the thoracic cavity and was filled with sponges in which targets were embedded. The targets consisted of a deformable water balloon, deformable silicon, and a rigid glass ball.
A silicon membrane was attached at the inferior end of the phantom. The silicon membrane was designed to simulate a real lung diaphragm and to reduce the motor workload. This specific design was able to reduce the motor size of the phantom. An acryl plate was attached to the driving rod to distribute power uniformly to the diaphragm. Since the driving rod was attached to the wheel of the motor as shown in Fig. 1(b), the strain gauge of the real-time position management (RPM) system could be used to obtain a series of CT images. The motor was programmed for regular breathing patterns with various breathing periods. When power was applied to the diaphragm, the water in the phantom pushed the latex balloon and the air in the balloon flowed out of air vents (Fig. 2). A balloon guide and fixing rod were designed to guide the balloon to a designated location. This device was designed to improve phantom reproducibility.
CT image sets were acquired using the RPM respiratory gating system from Varian Medical Systems (Varian, Palo Alto, CA, USA) and a Philips CT scanner (Philips, Milpitas, CA, USA). The image resolution was 0.7 × 0.7 × 2.5 mm3, and a total of ten 3D CT image sets were obtained corresponding to ten different phases.
B. Demons deformable registration
For this study we have used the intensity-based demons deformable registration algorithm. (13) The demons algorithm is suitable for deformable registration of images acquired with the same modality, as the algorithm assumes that homologous voxels representing the same points have equal intensity in both images. The 3D variant demons algorithm implemented in the Insight Toolkit (ITK) (14) was used to calculate a deformation grid. More specifically, before processing an image fusion, the CT image sets were converted to the 3D mhd file format (i.e., a meta image header file) with the same pixel spacing and thickness. The demons algorithm is based on gradient calculations from both static and moving images to determine the demons force. The displacement between images, D, is calculated from

D = (m - f) [ ∇f / (|∇f|^2 + (m - f)^2) + ∇m / (|∇m|^2 + (m - f)^2) ]    (1)

where m and f are the moving and fixed images, respectively, and ∇m and ∇f are the gradients of the moving and fixed images, respectively. Linear interpolation for the 3D resampling procedure was used to compute values at non-integer positions in the original CT image.
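The ITK implementation used in the study is C++; a comparable pipeline can be sketched with SimpleITK's Python bindings, as below. File names and parameter values are illustrative placeholders, not the study's settings.

```python
import SimpleITK as sitk

# Demons deformable registration sketch: align the exhale (moving)
# image set to the inhale (fixed) reference, then warp the moving image.
fixed = sitk.ReadImage("peak_inhale.mhd", sitk.sitkFloat32)   # reference
moving = sitk.ReadImage("peak_exhale.mhd", sitk.sitkFloat32)  # to deform

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.0)  # Gaussian regularization of the field
displacement = demons.Execute(fixed, moving)

# Resample the moving image through the displacement field with linear
# interpolation, as in the study.
transform = sitk.DisplacementFieldTransform(
    sitk.Cast(displacement, sitk.sitkVectorFloat64))
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(warped, "exhale_deformed.mhd")
```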
The exhale data in the 3D CT image sets served as the moving model, and deformable registration was performed to align the moving model to the static reference model (the inhale data set). Registration accuracy was measured with the center of gravity (COG) parameter. The target volumes were defined separately on the moving and reference image sets, and the COGs were calculated; the distances between the COGs were then compared. We are aware that the edge of a tumor is also an important factor in accounting for uncertainties associated with patient breathing and in determining planning target volume expansion. However, we only measured the COG for quantitative registration evaluation. The COGs were calculated using the CorePLAN radiation planning system (Seoul C & J, Seoul, Korea). Figure 3 shows the 3D CT image sets acquired for the corresponding respiratory cycles. This figure depicts amplitude differences for different phases between peak inhale and peak exhale. The peak inhale to peak exhale distance of the silicon membrane was 2 cm, and the maximum and minimum volumes were 1505.6 cc and 1111.1 cc, respectively. Figure 4 shows the results of rigid body registration. It demonstrates the volume change and target movement between peak inhale and peak exhale on axial, sagittal, and coronal views. Figure 5 shows the results of deformable registration. Figures 5(a) and (b) show peak inhale and exhale images for axial views, while Figures 5(c) and (d) show the deformation magnitude and vector after execution of the demons deformable registration algorithm. The space around the latex balloon is filled with water, which replicates a chest mass in the phantom. The presence of water caused a strong contraction of the lower area of the balloon when power was applied, due to gravity. This effect is not relevant for a real patient, as the chest mass of a patient is composed of muscle.
Figures 5(e) and (f) show peak inhale and exhale images for coronal views, while Figures 5(g) and (h) show the deformation magnitude and vector after execution of the demons deformable registration algorithm. The displacement of the target as seen on a coronal view is relatively small compared to the displacement of the silicon membrane, due to the flexibility of the sponges.
For the quantitative evaluation of deformable registration, we calculated the COGs of the phantom targets. Table 1 presents the residual differences between the deformed and reference image sets. The static target point was normalized as reference point 0 and used to demonstrate 3D vector differences. As shown in Table 1, the vector differences ranged from 0.60 mm to 1.11 mm from the glass target to the water balloon target, and the registration accuracy was determined according to target deformation. The water balloon, which had a larger deformation, showed a larger discrepancy; the rigid glass ball showed relatively good agreement, within 1 mm, for image registration accuracy.
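A minimal sketch of this COG-based evaluation is shown below. The binary target masks are tiny hypothetical examples; only the voxel spacing (0.7 × 0.7 × 2.5 mm) is taken from the acquisition described above.

```python
import numpy as np

# 3D vector difference between target centers of gravity (COGs) on the
# deformed and reference image sets.
spacing = np.array([2.5, 0.7, 0.7])  # (z, y, x) voxel spacing in mm

def cog_mm(mask):
    """COG of a binary target mask in physical (mm) coordinates."""
    return np.argwhere(mask).mean(axis=0) * spacing

# Hypothetical masks: the "deformed" target is shifted by one voxel in y.
reference = np.zeros((10, 20, 20), dtype=bool)
reference[4:6, 8:12, 8:12] = True
deformed = np.zeros_like(reference)
deformed[4:6, 9:13, 8:12] = True

diff = np.linalg.norm(cog_mm(deformed) - cog_mm(reference))
print(f"3D vector difference = {diff:.2f} mm")  # 0.70 mm in this example
```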
IV. CONCLUSIONS
We have developed an anthropomorphic deformable lung phantom including specific structures for phantom efficiency and reproducibility, and have verified demons deformable registration. The accuracy of the registration was influenced by deformation of the inner target material. A deformable phantom should contain various deformable targets for the accurate evaluation of deformable registration. Validation of deformable registration was performed based on distance and vector differences between 3D reference and deformed image sets. An advantage of this evaluation method is convenient implementation for most radiation treatment planning systems. This phantom could simulate the features and deformation of a real human lung, and has a wide potential for lung ART including 4D radiation treatment planning. In addition to the verification of deformable registration, the phantom can be used to determine whether there is a dosimetric advantage to compensate doses by calculation of the cumulative dose distribution during the course of radiation treatment. Moreover, this phantom can be used to evaluate a 4D CT reconstruction algorithm and dosimetric uncertainties due to organ motion and deformation. Further studies will be needed to evaluate long-term reproducibility and the reliable application of deformable phantoms. | 2,585.8 | 2010-01-28T00:00:00.000 | [
"Medicine",
"Physics"
] |
If Virchow and Ehrlich Had Dreamt Together: What the Future Holds for KRAS-Mutant Lung Cancer
Non-small-cell lung cancer (NSCLC) with Kirsten rat sarcoma (KRAS) mutations has notoriously challenged oncologists and researchers for three notable reasons: (1) the historical assumption that KRAS is “undruggable”, (2) the disease heterogeneity and (3) the shaping of the tumor microenvironment by KRAS downstream effector functions. Better insights into KRAS structural biochemistry allowed researchers to develop direct KRAS(G12C) inhibitors, which have shown early signs of clinical activity in NSCLC patients and have recently led to an FDA breakthrough designation for AMG-510. Following the approval of immune checkpoint inhibitors for PDL1-positive NSCLC, this could fuel yet another major paradigm shift in the treatment of advanced lung cancer. Here, we review advances in our understanding of the biology of direct KRAS inhibition and project future opportunities and challenges of dual KRAS and immune checkpoint inhibition. This strategy is supported by preclinical models which show that KRAS(G12C) inhibitors can turn some immunologically “cold” tumors into “hot” ones and therefore could benefit patients whose tumors harbor subtype-defining STK11/LKB1 co-mutations. Forty years after the discovery of KRAS as a transforming oncogene, we are on the verge of approval of the first KRAS-targeted drug combinations, thus therapeutically unifying Paul Ehrlich’s century-old “magic bullet” vision with Rudolf Virchow’s cancer inflammation theory.
Introduction
In 1900, the German Nobel laureate Paul Ehrlich suggested a concept of "magic bullets" ("Zauberkugeln") to specifically target invading microbes, a concept that was subsequently adapted to describe highly specific, oncogene-targeted cancer treatments [1]. More than 100 years later, non-small-cell lung cancer (NSCLC) with activating mutations of the Kirsten rat sarcoma (KRAS) oncogene-despite representing almost one-third of all lung cancer cases-remains a tumor entity for which no fully FDA-or EMA-approved oncogene-targeted therapies exist (for a broader overview of the frequencies of known oncogenic driver events in NSCLC we refer to [2][3][4][5][6]). Accordingly, affected patients still face a dismal prognosis [7][8][9][10]. In contrast to clinically approved oncogene-targeted therapies for various other malignancies, e.g., imatinib for BCR/ABL-positive chronic myeloid leukemia (CML) or EGFR and ALK inhibitors for EGFR-mutant and EML4/ALKrearranged NSCLC, respectively [11][12][13][14], the development of a "magic bullet" against mutant KRAS has remarkably challenged scientists and physicians alike because it had long been considered "undruggable" due to biochemistry constraints [15].
Another reason for the aggressive behavior and difficulty in treating KRAS-mutant lung cancer is its highly inflammatory phenotype [16]. The first to postulate a connection between chronic inflammation and cancer, in the 19th century, was Rudolf Virchow, a pathologist at Berlin's Charité hospital, who had observed the presence of leucocytes in neoplastic tissue. Table 1 lists factors contributing to the disease heterogeneity of KRAS-mutant non-small-cell lung cancer.
Mutant KRAS Proteins Orchestrate the Tumor Microenvironment
The abilities of cancer cells to promote local inflammation and to simultaneously escape immune-mediated elimination are important cancer hallmarks [76]. The tumor microenvironment (TME) represents an intricate ecosystem composed of multiple noncellular and cellular components including stroma and immune cells. Cancer cells actively shape the composition and functionality of the TME by direct cell-to-cell interactions and/or by chemokine secretion. Mutant KRAS proteins play a central role in this process. KRAS-dependent effector functions increase the expression of so-called immune checkpoints like programmed death ligand-1 (PDL1), which by binding to PD1 prevents T cells from killing cancer cells [77,78]. They also restrict cancer-cell-intrinsic MHC class II expression, which is essential for the recognition of cancer cells by T cells [79], and impair T-cell effector functions and antitumor immunity via cyto-/chemokine-mediated (e.g., IL-1, IL-6, IL-8, GM-CSF) induction of myeloid-derived suppressor cells (MDSCs), regulatory T cells and M2-differentiated tumor-associated macrophages (TAMs) [80][81][82][83][84][85][86][87][88][89] (Figure 1). Mutant KRAS also induces NF-kB and cooperates with MYC, two master regulators of inflammation and immunosuppression [90][91][92][93].
Efficacy of Direct KRAS(G12C) Inhibitors and Mechanisms of Resistance
The first evidence that inhibition of oncogenic KRAS is beneficial from a therapeutic viewpoint came from immunocompetent genetically engineered mouse models (GEMMs) of lung cancer in which lung tumors completely regressed upon genetic removal of mutant KRAS [24][25][26]. In the last decade, better biochemical insights into KRAS structural biology finally allowed researchers to develop direct KRAS(G12C) inhibitors like the preclinical compounds SML-8-73-1, compound 12 ("Shokat compound"), ARS-853 and ARS-1620, as well as the clinical compounds AMG-510 (Amgen) and MRTX-849 (Mirati). Immune checkpoint inhibitors (ICIs) block the PDL1-PD1 receptor interaction and thus can reinvigorate antitumor immune responses in some patients with so-called "hot" tumors. ICIs alone or in combination with chemotherapy have become standard-of-care treatment for NSCLC patients whose tumors express PDL1 and lack EGFR mutations or EML4/ALK rearrangements [94][95][96][97][98][99]. These immunologically "hot" tumors are characterized by the accumulation of proinflammatory cytokines, high PD-L1 expression and intratumoral accumulation of CD8+ tumor-infiltrating lymphocytes (TILs), which are required for ICIs to be effective [100]. In contrast, immunologically "cold" tumors are deprived of TILs. Interestingly, KRAS-mutant tumors with TP53 co-mutations are associated with a "hot" TME [73,74], whereas STK11/LKB1 co-mutated tumors exhibit lower expression of stimulator of interferon genes (STING) and of immune-related expression signatures than wild-type tumors ("cold" TME) and therefore are typically checkpoint-inhibitor-resistant [75,101]. Ongoing scientific efforts are seeking to decipher the mechanistic basis for this lack of ICI response in STK11/LKB1 co-mutated tumors with the aim of developing new strategies to turn "cold" tumors into "hot" ones and thus increase the benefit of immune checkpoint blockade for NSCLC patients with this prevalent genotype [102].
The phase I/II CodeBreaK-100 trial (NCT03600883) investigating the clinically most advanced KRAS(G12C) inhibitor AMG-510 (international nonproprietary name (INN) sotorasib) reported a confirmed objective response rate (ORR) of ~32% and a disease control rate (DCR) of ~88% among lung cancer patients in the phase I trial part. Grade 3/4 toxicities and treatment discontinuation occurred in 11.6% and 7% of patients, respectively [111]. Based on this study, in December 2020, the FDA granted breakthrough therapy designation for sotorasib for patients with KRAS(G12C)-mutant, locally advanced or metastatic NSCLC following at least one prior systemic therapy. The phase II trial part, with a data cutoff on 1 December 2020 and a median follow-up of 12.2 months, validated the phase I results with a confirmed ORR of 37.1%, a DCR of 80.6% and a median duration of response of 10 months. The median progression-free survival was 6.8 months. Treatment-related adverse events (TRAEs) led to treatment discontinuation in 7.1% of patients. Most TRAEs were grade 1 and 2 and included diarrhea (31% any grade), nausea (19%), liver enzyme changes (15%) and fatigue (11%) (presented at the 2020 World Conference on Lung Cancer, 28-31 January 2021).
The median follow-up of the phase I/II KRYSTAL-1 trial (NCT03785249) for Mirati's MRTX-849 compound (INN adagrasib) was still relatively short (9.6 months) at the last study update presented at the 32nd EORTC-NCI-AACR Symposium (24-25 October 2020), but the data reported for NSCLC patients appear slightly better than those for AMG-510, although longer follow-up is still required [112]. Of the evaluable patients, 45% achieved a partial response, and the DCR was 96%. TRAEs led to treatment discontinuation in 4.5% of patients and most commonly included nausea (54%), diarrhea (48%), vomiting (34%), fatigue (28%) and increased liver enzymes (23%). The only commonly reported grade 3/4 TRAE was hyponatremia (3%) (the clinical efficacy parameters are summarized in Table 2). Adagrasib has a longer half-life of about 24 h compared to sotorasib (half-life 6.5 h), which allows for continuous drug exposure and sustained KRAS target inhibition. Adagrasib also penetrates the blood-brain barrier in murine models and showed efficacy in one patient with NSCLC and active brain metastases, an important feature for this highly metastatic disease with frequent brain/central nervous system manifestation. It is most likely, however, that allele-specific KRAS(G12C) inhibitors will be combined with other treatment modalities. Monotherapies presumably have limited long-term efficacy in terms of preventing adaptive resistance since (1) tumors with minor fractions of cancer cells that harbor non-G12C KRAS mutations will ultimately relapse due to selection of these subclones [113,114] and (2) several mechanisms have been proposed for how cancer cells can lose their KRAS dependency. Among others, these include YAP pathway activation [115] and increased KRAS(G12C) expression via EGFR or aurora kinase signaling [116]. Drug combinations can furthermore account for the fact that currently available KRAS(G12C) inhibitors only target the inactive, GDP-bound form of KRAS and thus rely on the residual intrinsic hydrolysis of GTP to revert KRAS into the GDP-bound state. This mechanism is vulnerable to adaptive responses that activate upstream signaling, e.g., via receptor tyrosine kinases (RTKs) like the ErbB family or FGFR [56,58,117]. These RTKs signal through SHP2 (encoded by PTPN11), increase the GTP-loaded "ON" form of KRAS and therefore reduce KRAS(G12C) inhibitor target engagement [118,119]. Inhibition of SHP2 reduces this conversion of GDP-into GTP-bound KRAS and overcomes adaptive resistance to MAPK-pathway-targeted agents including KRAS(G12C) inhibitors [120,121]. In a patient with NSCLC, a reduction of tumor volume has been observed when adagrasib was combined with the experimental SHP2 inhibitor TNO-155 (Novartis), and the corresponding phase I/II KRYSTAL-2 study is currently recruiting patients (NCT04330664). In another phase Ib clinical trial, the combination of sotorasib and the experimental SHP2 inhibitor RMC-4630 will be investigated [122,123]. Other drug combinations (adagrasib plus pan-ErbB inhibitor afatinib; ARS-1620 plus mTOR inhibitor (everolimus) or linsitinib) follow the same biological rationale of simultaneously inhibiting KRAS and the nucleotide exchange on KRAS via RTKs [124] (for a comprehensive overview of clinical trials investigating strategies to overcome resistance to drugs targeting the KRAS(G12C) mutation we refer to [125]).
Conclusions and Future Perspectives of Combination Therapeutic Approaches
More than a century after the pioneering scientific work of Rudolf Virchow and Paul Ehrlich [1,17,18], for non-small-cell lung cancer (NSCLC), the era of cancer immunotherapy, which harnesses the immune system to kill cancer cells, follows the era of groundbreaking discoveries in the field of oncogene-targeted therapies [19,20]. However, the progress made during the "targeted therapy revolution" for EGFR-mutant and EML4/ALK-rearranged lung cancers, among other oncogene-addicted pulmonary malignancies (for a comprehensive overview of oncogene-directed therapies against NSCLC, e.g., with ROS1/NTRK, BRAF, MET and HER-2 aberrations, we refer to [126]), had largely spared KRAS-mutant NSCLC despite the anticipated efficacy of KRAS inhibitors for this highly prevalent but heterogeneous lung cancer subtype [11,12,14,[24][25][26]127] (Table 1). After overcoming biochemistry constraints to directly inhibit KRAS(G12C), the most frequent mutational subtype in NSCLC, the historical assumption of KRAS as being an "undruggable" target needs to be irrevocably discarded [103][104][105][106][107][108][109][110]. First reports from phase I/II clinical trials investigating the direct KRAS(G12C) inhibitors sotorasib and adagrasib are very impressive considering this notoriously hard-to-treat patient subgroup. Response rates are slightly inferior to those observed with other oncogene-targeted therapies (e.g., against mutant EGFR or rearranged EML4/ALK) in pretreated patients, and differences in response rates could be due to the heterogeneity of KRAS-mutant lung tumors with multiple DNA-damage-associated genomic alterations [128][129][130][131]. Additional KRAS(G12C) inhibitors are continuing to emerge for which clinical efficacy parameters have yet to be reported (JNJ-74699157/ARS-3248: NCT04006301; GDC-6036: NCT04449874); for others, the clinical development has been stopped due to unexpected toxicities (LY-3499446: NCT04165031). Currently ongoing (CodeBreaK-200 for AMG-510: NCT04303780) and future randomized phase III trials will ultimately show the true benefit of direct KRAS(G12C) inhibitors in untreated patients and presumably establish KRAS(G12C) inhibitors as the frontline treatment for KRAS(G12C)-mutant lung cancers (the current status of clinical development of KRAS(G12C) inhibitors is summarized in Table 2).
To induce deeper initial tumor regressions and to prevent the emergence of resistant cancer cell clones, multidrug combinations (e.g., KRAS(G12C) inhibitors with SHP2 and pan-ErbB inhibitors) are currently being clinically evaluated. Historically, drug combinations targeting KRAS-dependent downstream pathways (e.g., continuous MEK and PI3K inhibition) have been limited by toxicities [63,64], but KRAS(G12C) inhibitors avoid wild-type KRAS reactivity and therefore are less prone to off-target effects that are believed to disturb tissue homeostasis [127]. The lack of dose-limiting toxicities observed with sotorasib and adagrasib is encouraging and seems to make them ideal partners for combination treatment strategies, including those that incorporate immunotherapy.
Sensitivity to immune checkpoint inhibitors (ICIs) has been associated with a high tumor mutational burden (TMB) [132][133][134], and therefore, the smoking-related etiology of KRAS(G12C)-mutant NSCLC with a high mutational burden predestines affected patients for immunotherapy [44][45][46][69][70][71][72]135]. ICIs have been established as standard-of-care treatment, as a single agent or in combination with chemotherapy, for NSCLC patients whose tumors express PDL1 and lack EGFR mutations or EML4/ALK rearrangements [94][95][96][97]134]. However, response rates to single-agent ICIs are overall modest, and strategies to overcome this limitation are urgently required. The clinical benefit of combined PD1/PDL1 and CTLA-4 inhibition remains controversial despite FDA approval of the nivolumab plus ipilimumab combination [136] and comes at the cost of an increased risk of serious immune-related adverse events compared to anti-PD-1 therapy alone [137].
Due to the important function of KRAS in reducing cancer cell immunogenicity and inducing local immunosuppression (Figure 1), KRAS(G12C) inhibitors were expected to have profound effects on the tumor microenvironment (TME). In KRAS-mutant NSCLC, the TME is frequently characterized by a paucity, lack and/or dysfunction of tumor-infiltrating lymphocytes (TILs), especially in the presence of co-occurring mutations in STK11/LKB1 [41,75,101]. This immunologically "cold" TME impairs the efficacy of ICIs [100], and therefore, strategies to turn "cold" tumors into "hot" ones are urgently needed. Indeed, similarly to MEK and SHP2 inhibition, sotorasib and adagrasib induced a more proinflammatory and TIL-infiltrated TME in mouse models ("reconditioning" effect) [103,121,[138][139][140]. This translated into durable complete responses in combination with anti-PD-1 therapy. Mice that were cured with a combination of sotorasib and pembrolizumab subsequently rejected KRAS(G12C)-mutant CT26 tumors, suggesting that combining KRAS(G12C) inhibitors with immune checkpoint inhibitors could even drive an acquired immune response. Adaptive rather than innate immunity offers the greatest potential for durable, robust anticancer immune responses to prevent tumor relapse and/or metastatic spread. However, these results from preclinical models have yet to be confirmed in human clinical trials. The combination of a KRAS(G12C) inhibitor and immunotherapy could specifically benefit those patients whose tumors harbor STK11/LKB1 co-mutations (~30% of KRAS(G12C)-mutant NSCLC). These tumors are linked to poor outcomes with immunotherapy and platinum-based chemotherapy [75,141,142]. Even though numerous strategies aimed at bolstering immunity against STK11-mutant tumors are currently under investigation (e.g., dual immune checkpoint inhibition with nivolumab and ipilimumab [143]), a broader spectrum of efficacious therapeutic options is urgently needed. Exploratory correlative analyses from the KRYSTAL-1 (presented at the 32nd EORTC-NCI-AACR Symposium, 24-25 October 2020) and CodeBreaK-100 (presented at the 2020 World Conference on Lung Cancer, 28-31 January 2021) trials in this context suggest higher response rates for single-agent adagrasib (64% versus 45%) and sotorasib (50% versus 42%) among patients whose tumors also harbored an STK11/LKB1 co-mutation. Even though these early findings need to be confirmed in larger clinical trials that combine sotorasib (NCT03600883) and adagrasib (NCT04613596, KRYSTAL-7) with the anti-PD-1 antibody pembrolizumab, they give us a glimpse of the extraordinary potential of these KRAS-targeted agents for this historically difficult-to-treat patient subgroup.
Other strategies to boost the immune response against KRAS-mutant cancers include STING agonists (ADU-S100, MK-1454) [101,144,145], as well as CAR-T cells (adoptive T-cell transfer) [146] and mRNA vaccine technology [147]. For the latter, the current worldwide first use of mRNA vaccine technology to fight the COVID-19 pandemic [148,149] could boost its development and acceptance in the field of oncology. In an ongoing phase I trial (V941-001), Moderna and Merck are testing mRNA-5671 alone and in combination with pembrolizumab in patients with KRAS-mutant cancers. mRNA-5671 is designed to generate and present the four most prevalent KRAS mutations (G12C, G12D, G12V and G13C) as neoantigens in host cells to the immune system to drive a more robust T-cell response (no efficacy data are publicly available yet).
A major caveat we still face today is the fact that so far no specific inhibitors of non-G12C mutations have entered clinical trials. In NSCLC, these mutations represent more than 50% of all KRAS mutations. A potential first-in-class inhibitor of KRAS(G12D), MRTX-1133, is currently in preclinical development by Mirati (to the best of our knowledge, there are no publicly available data on this compound yet). In an alternative approach, the son of sevenless 1 (SOS1) protein, which determines the nucleotide exchange on KRAS, has gained much attention as a therapeutic target to inhibit all major G12D/V/C and G13D variants. Boehringer Ingelheim's BI-1701963 and Bayer's BAY-293 "pan-KRAS inhibitors" selectively inhibit the SOS1-KRAS interaction [150,151], but unfortunately, apoptosis induction and tumor regressions were only observed when this drug class was combined with a MEK inhibitor. Phase I dose-finding studies are currently recruiting patients with solid tumors (BI-1701963 plus trametinib: NCT04111458) or are planned (BI-1701963 plus adagrasib). Other strategies to target non-G12C mutations include so-called switch I/II pocket inhibitors like BI-2852, which bind with nanomolar affinities to the active and inactive forms of KRAS [152,153], or mutant-selective "tricomplex" inhibitors, which sterically block interactions between KRAS and effector proteins such as RAF [154]. KRAS-targeting monobodies [155] and intrabodies [156] further add to the spectrum of therapeutic approaches currently under investigation.
From an oncologist's perspective, the coming years in the field of KRAS-mutant NSCLC will be exciting and extremely laborious at the same time. Future clinical trials will have to teach us which of the multiple available drug combinations (summarized in Figure 2) are the most efficacious and/or which treatment sequence is optimal from a tumor-evolution perspective. The establishment of potent treatment predictors and the systematic analysis of longitudinal on-treatment biopsies will hold the key to a better understanding of the biology of treatment resistance and will guide clinical decision-making about rationally designed subsequent treatment combinations. To speed up the efficient development of drug combinations, refinement of KRAS-mutant GEMMs to better recapitulate the genetic and immunologic complexity of human lung tumors with a high tumor mutational burden is highly desirable [157,158].
Despite some limitations, it is extremely exciting to see that, more than a century after the groundbreaking scientific work of Paul Ehrlich and Rudolf Virchow, we are now on the verge of therapeutically unifying the concepts of both pioneers: harnessing synergistic effects between immune checkpoint inhibitors and "magic bullet" KRAS(G12C) inhibitors that have the potential to "recondition" the immunosuppressed tumor microenvironment. More than 40 years after the identification of RAS as a transforming oncogene, this approach could revolutionize treatment paradigms for KRAS-mutant NSCLC. Therefore, after many discouraging therapeutic attempts in the past, we have every reason to look to the future with optimism.
"Biology"
] |
Petroleum Source-Rock Evaluation and Hydrocarbon Potential in Montney Formation Unconventional Reservoir, Northeastern British Columbia, Canada
The source-rock characteristics of the Lower Triassic Montney Formation presented in this study comprise the total organic carbon (TOC) richness, thermal maturity, hydrocarbon generation, and geographical distribution of TOC and thermal maturity (Tmax) in the Fort St. John study area (T86N, R23W and T74N, R13W) and its environs in northeastern British Columbia, Western Canada Sedimentary Basin (WCSB). TOC richness in the Montney Formation within the study area is grouped into three categories: low TOC (<1.5 wt%), medium TOC (1.5-3.5 wt%), and high TOC (>3.5 wt%). The thermal maturity of the Montney Formation source rock indicates that >90% of the analyzed samples are thermally mature, mainly within the gas-generating window (wet gas, condensate, and dry gas), and the kerogen comprises mixed Type II/III (oil/gas prone) and Type IV (gas prone). Rock-Eval parameters (TOC, S2, Tmax, HI, OI and PI) obtained from 81 samples in 11 wells that penetrated the Montney Formation in the subsurface of northeastern British Columbia were used to map source-rock quality across the study area. Based on TOC content mapping, the geographical distribution of thermal maturity (Tmax), and the evaluation and interpretation of the other Rock-Eval parameters, the Montney Formation kerogen is indicative of a pervasively mature petroleum system in the study area of northeastern British Columbia.
Introduction
Source rocks are precursors for hydrocarbon accumulation and reservoir potential. In general, source rocks are organic-rich sediments that have generated, or may generate, hydrocarbons [1], and they are a primary element in any petroleum system [2].
Successful exploration for oil and gas depends largely upon source-rock quality. The Rock-Eval technique is used to determine both source-rock quantity (total organic carbon content) and quality. Rock-Eval pyrolysis methods have been utilized worldwide for more than three decades as an aid to determining source-rock parameters: Tmax, TOC richness, Hydrogen Index (HI), Oxygen Index (OI), Production Index (PI), the remaining hydrocarbon-generating potential (S2), and a host of other products [3]-[11]. Rock-Eval pyrolysis is used to rapidly evaluate and depict the petroleum-generating potential of prospective source rocks [11] by providing information about: 1) kerogen type and organic matter quality; 2) the type and characteristics of the organic matter; 3) the thermal maturity of the organic matter; and 4) the hydrocarbon type (oil, gas or both).
The geographical distribution of source-rock parameters within a particular acreage of exploration interest constitutes part of the assessment mechanics of hydrocarbon exploration [11]. Source-rock evaluation involves assessing the hydrocarbon-generating potential of sediments by examining their capacity for hydrocarbon generation, the type of organic matter present and what hydrocarbons it might generate, and the sediments' thermal maturity and how it has influenced generation [12]. To understand the source-rock potential of the Montney Formation, the Rock-Eval method was utilized.
Until recently, unconventional reservoirs such as these were considered non-economical, unproductive, and non-exploitable geological formations owing to poor understanding of their lithological heterogeneity and mineralogical variability, coupled with less advanced technology. However, improved technology has revolutionised unconventional or tight reservoirs. The inherent petrophysical properties of unconventional reservoirs are low matrix porosity (≤10%) and low permeability (≤0.1 millidarcy, mD), exclusive of fracture permeability [20]. Typically, these reservoirs depend on stimulation for production and, in general, contain large amounts of hydrocarbons, although gas recovery factors may be low [21].
The Montney Formation in the study area is a primary focus of unconventional gas reservoir exploration in the Western Canada Sedimentary Basin (WCSB) because: 1) it is a source rock rich in organic matter [22]; 2) its thermal maturity lies within the gas-generating window, and it is primarily a gas-prone mixed Type II/III kerogen [22]; 3) the present study shows that the kerogen of the Montney Formation in the study area is mainly composed of Type III/IV with some mixed Type II/III kerogen, with an average TOC range of 0.5-4 wt% and up to 8.2 wt% TOC (rare, but present); 4) it has a reservoir thickness of up to 320 meters in the study area; 5) it hosts substantial volumes (natural gas reserve = 271 TCF; liquefied natural gas (LNG) = 12,647 million barrels; oil reserve = 29 million barrels) according to the BC Ministry of Energy, Mines and Natural Gas [23]; and 6) porosity ranges from 2% to 10%, and sporadically exceeds 10% in intervals where ichnofabric or dolomite dissolution has produced secondary porosity. These criteria make the Montney Formation an unconventional resource play with high potential within the Fort St. John study area, northeastern British Columbia (Figure 1). However, despite the strong economic significance of this hydrocarbon resource hosted in finer-grained "siltstone/very fine-grained sandstone" intervals, the location and predictability of the best reservoir units remain conjectural, in large part because the geochemistry, lithologic variability, and mineralogy of the Montney tight rocks hosting thermogenic gas in the subsurface of Western Canada have not been adequately characterized [2][18][19].
The depositional environments interpreted for the Montney Formation in the study area are characteristic of lower shoreface through proximal offshore to distal offshore settings [2][24]. The lower shoreface environment records trace fossils attributed to the Skolithos ichnofacies [25]. The proximal offshore environment has sedimentary structures formed under quiescent depositional conditions typically found below fair-weather wave base [26], such as lamination and current ripples [27]. The distal offshore environment has trace fossils attributed to a distal expression of the Cruziana ichnofacies [25]. The sedimentary structures recorded in the logged Montney Formation cores include current ripples, deformation structures, and convolute lamination/bedding. The sediment deformation structures and convolute lamination/bedding formed through mechanical forces causing plasticity, commonly related to gravity acting upon weak sediments (usually silt or sand) prior to, during, or soon after deposition along the sediment surface [28][29]; escape traces (Fugichnia?) provide evidence of small-scale episodic deposition due to local transport from the lower shoreface or proximal offshore to the distal setting.
This paper addresses: 1) the evaluation of the Montney Formation source-rock richness; 2) thermal maturity and hydrocarbon generation in the Montney Formation; and 3) the geographical distribution of the Rock-Eval parameters (TOC and Tmax) in the study area.
Geological Setting
The paleogeographic location of the Western Canada Sedimentary Basin (WCSB) during Triassic time was at approximately 30˚N paleolatitude, based on analyses of paleomagnetic data, paleolatitude and paleoclimatic zonation [30], and the faunal record [31]. Paleoclimate reconstructions suggest that the climate may have ranged from sub-tropical to temperate [30][31][32]. The region has been interpreted as arid during the Triassic and was dominated by winds from the west [30][33][34].
The WCSB forms a northeasterly tapering wedge of sedimentary rocks more than 6000 meters thick, which extends southwest from the Canadian Shield into the Cordilleran foreland thrust belt [32][35]. The Cordillera of the WCSB provides evidence that the origin and development of the basin were associated with tectonic activity [32][36]. Later epeirogenic episodes resulted in subsidence that created the basin for sediment accumulation, attributed to the effects of contemporaneous episodes of orogenic deformation in the Cordillera [35][37]. This is interpreted to be post-Triassic, especially given the mountain influences [32]. [38][39][40] interpreted sediment loading, evidenced by deformed beds, slump structures and small-scale faults, as an indicator of tectonic influence on the deposition of the Triassic successions. Within the Foothills and Rocky Mountain Front Ranges, Triassic rocks were subjected to the Jurassic-Cretaceous Columbian and Upper Cretaceous-Lower Tertiary Laramide orogenies, which produced a series of imbricate thrust faults and folds in the region [41].
In Alberta and British Columbia, Triassic sediments were deposited in a central sub-basin known as the Peace River Embayment, which extended eastward from the western Panthalassa ocean onto the North American craton [41]. During the Triassic period, the Peace River Embayment was a shallow mini-basin in which minor fault-block movement associated with a broad downwarp resulted in the rejuvenation of structural deformation within the Monias area, southwest of Fort St. John, British Columbia [41].
Stratigraphically (Figure 2), the Triassic Montney Formation is Griesbachian to Spathian in age [42]. The Triassic succession thickens westward [41] and rests unconformably, in most areas, upon the Belloy Formation in outcrops of northeastern British Columbia; upon Carboniferous strata in parts of northeastern British Columbia and Alberta; and upon the Fantasque Formation in outcrop at Williston [42]. The thickness of the Triassic deposits is about 1200 meters in the westernmost outcrops in the Rocky Mountain Foothills [43]. The Montney Formation structure map (Figure 3) indicates higher paleostructure in the east and lower in the western portion of the study area. This structural tilt reflects depositional thinning to the east and north due to erosional removal [2][24][41].
Method of Study
Drilled cores of the Montney Formation from the study area in the Fort St. John vicinity, northeastern British Columbia, were logged to assess sedimentological, ichnological and facies characteristics. The lithologic features and accessories, sedimentary texture, sedimentary structures, the nature of bedding contacts, and lithofacies were compiled in detail (Figure 4 and Figure 5).
Samples were crushed to powder using a shatter-box pulverizer at the University of Alberta's rock-crushing lab, then sent to the Geological Survey of Canada and Chesapeake Energy Corporation, Oklahoma City, USA, for Rock-Eval analyses (Table 1). Additional Rock-Eval data (Table 2) included in this paper come from the Oil and Gas Commission, Ministry of Energy, British Columbia, and further data (Table 3) come from [13].
The anhydrous pyrolysis technique used in this study evaluates oil and gas shows, oil and gas generation potential, and thermal maturity, and identifies organic matter type [1][11][44][45][46][47]. The Montney Formation rock samples were pyrolyzed using Rock-Eval 6. [46] described the Rock-Eval apparatus as one that performs programmed-temperature heating of a small amount of rock sample (100 mg) in an inert atmosphere (helium or nitrogen). Rock-Eval pyrolysis is a standard analytical method used to determine the petroleum-generating potential and thermal maturity of the kerogen occurring in a rock [5][47]. The procedure consists of progressively heating the whole rock from an initial temperature of 25˚C in the Rock-Eval 6 analyzer while measuring the hydrocarbons released as the artificial thermal heating increases to 650˚C [46], as shown in Figure 6. The key parameters from Rock-Eval analyses are: 1) total organic carbon (TOC); 2) Tmax; 3) Hydrogen Index (HI); 4) Oxygen Index (OI); 5) Production Index (PI); and 6) the S2 peak.
Rock-Eval Geochemistry
Rock-Eval was originally designed for measuring the maturity of coal macerals [5][48]. It is a useful screening technique for recognizing source-rock and kerogen quality, and it has become a major oil and gas exploration tool that gives the exploration geologist insights into source-rock characteristics and reservoir potential. The key Rock-Eval parameters (TOC, Tmax, HI, OI, PI and S2) are fundamental to determining source-rock richness, kerogen type, and maturation, which together form critical elements in the assessment of a petroleum system, its risk segments, and the high-grading of resource plays.
Description: Montney Formation Total Organic Carbon (TOC)
The TOC content of a rock is determined by oxidation under air, in an oven, from the organic carbon residue after pyrolysis [46]. The measured TOC values for the Montney Formation are shown in Tables 1-3. The geographical distribution of average TOC per well within the study area is shown in Figure 7. The general trend of TOC is low in the western part of the study area, and TOC values increase eastwards into the Alberta Province (Figure 7). TOC in the Montney Formation is variably and statistically grouped into low TOC (<1.5 wt%), medium TOC (1.5-3.5 wt%), and high TOC (>3.5 wt%).
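This three-way TOC grouping translates directly into a simple classifier. A minimal Python sketch follows, with thresholds taken verbatim from the text; the function name and the sample values are illustrative only:

```python
def classify_toc(toc_wt_pct):
    """Bin a TOC measurement (wt%) into the three categories used in this study."""
    if toc_wt_pct < 1.5:
        return "low"
    if toc_wt_pct <= 3.5:
        return "medium"
    return "high"

# Hypothetical sample values spanning the reported Montney range
for toc in (0.8, 2.4, 4.1, 8.2):
    print(f"TOC {toc} wt% -> {classify_toc(toc)}")
```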
Interpretation
TOC is an indicator of the total amount of organic matter present in the sediment [49]. The standard criteria for ranking source-rock richness (Table 4) were proposed by [50]. The hydrocarbon-generating potential is commonly interpreted using a semi-quantitative scale (Table 5) according to [51][52]. The Montney Formation TOC richness and distribution within the study area may be related to factors such as: 1) the depositional conditions of the organic matter, its concentration and preservation, including the oxygen content of the water column and sediment type (i.e., oxic versus anoxic), as proposed by [50][53][54]; and 2) biological productivity and the availability and replenishment of nutrients [50][54], controlled by sunlight, temperature, and the pH and Eh of the waters [52]. Within the study area, the depositional environment interpreted for the Montney Formation is generally an offshore setting (inner shelf-proximal offshore to distal environment). The environment of deposition affects organic matter productivity and preservation [50][53][55].
Figure 7. The Montney Formation average TOC (wt%) map within the study area, northeastern British Columbia and northwestern Alberta. The red dots represent wells with Rock-Eval data.
Organic matter is preserved in oxygen-restricted environments at depths below wave base, in waters where density- or temperature-stratified water columns form, or in locations where oxygen replenishment is low [53][62].
It is hypothesized herein that the TOC distribution in the study area (Figure 7) may be related to the depositional environment's proximity to the organic matter source and to preservation conditions. TOC values are greater than 2.4 wt% around Fort St. John (along a NW-SE-trending contour of value 2 in Figure 7), and TOC values increase eastwards into Alberta, where [13] have reported Montney Formation TOC of >4 wt% (Table 3). TOC data from well 16-17-83-25W6, provided by the Oil and Gas Commission, Ministry of Energy, British Columbia, which is located outside of this study area, also show TOC of up to 8.2 wt% in the Montney Formation (Table 2).
Table 4. Criteria for ranking source-rock richness [50].
In the western portion of the study area (west of the boundary contour of value 2 in Figure 7), the TOC values are generally lower. The eastern portion, where TOC values are higher, lies within the region that has been interpreted as an outer-shelf depositional setting. The relatively higher TOC values in this eastern region are probably due to increased oxidation, while reducing conditions may have dominated the western portion of the study area, where TOC is low in a distal/deep basinal setting. Several workers [11][50][53][54][55][57] have reported that high TOC content in sediments is related to the depositional environment, the transport of organic matter, and preservation. Abundant nutrient supply and upwelling conditions may have dominated the region with higher TOC values in the NE-SE portion of the study area (Figure 7). Determination of the original total organic carbon (TOC) of a source rock provides a quantitative means to estimate the total volume of hydrocarbons it can generate, depending on kerogen type [58]. However, it is common practice to rate carbonate rocks with lower TOC as comparable with richer clastic rocks [48]: extractable hydrocarbon yields from leaner carbonate rocks are comparable to those from richer clastic rocks [45][59], and the organic matter associated with carbonate rocks is often more hydrogen-rich and thermally labile than that in fine-grained clastic rocks [1][44][47]. The Montney Formation is partly dolomitic and has variable TOC contents ranging from poor to excellent on the standard TOC richness scale (Table 4). The low TOC content of the Montney Formation in parts of the study area may be related to its mixed siliciclastic-dolomite composition.
Description: Montney Formation Hydrogen Index and Oxygen Index
The Oxygen Index (OI), measured in mg CO2/g TOC, is calculated from the amount of CO2 released and trapped at temperatures ranging from 300˚C to 390˚C during pyrolysis (Figure 6) [46]. The Oxygen Index corresponds to the quantity of carbon dioxide in the S3 peak (Figure 6) relative to TOC (mg CO2/g TOC), while the Hydrogen Index (HI) corresponds to the quantity of pyrolyzable organic compounds or "hydrocarbons" (HC) in the S2 peak relative to the total organic carbon [11]. The Hydrogen Index was calculated from the ratio S2/TOC using the method of [47].
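These definitions map onto two one-line calculations, HI = 100 × S2/TOC and OI = 100 × S3/TOC. A minimal Python sketch follows; the sample peak values are hypothetical, with units as above (S2 in mg HC/g rock, S3 in mg CO2/g rock, TOC in wt%):

```python
def hydrogen_index(s2_mg_hc_per_g, toc_wt_pct):
    """HI in mg HC/g TOC, from the S2 peak and TOC."""
    return 100.0 * s2_mg_hc_per_g / toc_wt_pct

def oxygen_index(s3_mg_co2_per_g, toc_wt_pct):
    """OI in mg CO2/g TOC, from the S3 peak and TOC."""
    return 100.0 * s3_mg_co2_per_g / toc_wt_pct

# Hypothetical sample: S2 = 3.2 mg HC/g, S3 = 0.4 mg CO2/g, TOC = 2.4 wt%
print(hydrogen_index(3.2, 2.4))  # ~133 mg HC/g TOC, in the low-HI (gas-prone) bracket
print(oxygen_index(0.4, 2.4))    # ~17 mg CO2/g TOC
```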
In the Montney Formation samples analyzed in this study, the HI is statistically distributed into three categories in order of highest percentile: low HI values (0-150), medium values (150-300), and high values (300-900). Of these categories, ~88% of the values fall within the low HI bracket, about 10% fall into the medium bracket, and 2% are in the high bracket. The OI values are very low, mostly less than 160, and a couple of data points have exceptionally high HI and OI, which may be outliers (Figure 8).
Interpretation of Hydrogen Index (HI) and Oxygen Index (OI)
The Hydrogen (HI) and Oxygen (OI) indices are used to determine the type of kerogen (Table 6) present in a source rock [11][46][47][85]. The plot of HI and TOC data on the pseudo-Van Krevelen diagram shows that the Montney Formation in the study area contains primarily Type III/IV kerogen with some mixed Type II/III kerogen (Figures 8-10, Table 6). For organic matter to generate hydrocarbons, the carbon has to be associated with hydrogen [12]. Kerogen is classified into Types I, II and III [1][45], plus Type IV [53]. Kerogen types are defined on the basis of hydrogen/carbon (H/C) and oxygen/carbon (O/C) values, i.e., the Hydrogen Index (HI) and Oxygen Index (OI) [54][60][61]. The use of the Van Krevelen diagram was extended by [45] from coals to kerogen dispersed in sedimentary rocks.
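HI cutoffs for kerogen typing vary between published schemes, so the sketch below is illustrative only: the Type IV cutoff (HI ≤ 50 mg HC/g TOC) matches the definition quoted later in this paper, while the remaining brackets are commonly cited approximations rather than values given in the text.

```python
def kerogen_type_from_hi(hi):
    """Rough kerogen typing from the Hydrogen Index (mg HC/g TOC).
    Cutoffs are approximate and vary between published schemes."""
    if hi <= 50:
        return "Type IV (inertinite, gas prone at best)"
    if hi < 200:
        return "Type III (gas prone)"
    if hi < 300:
        return "mixed Type II/III (oil/gas prone)"
    if hi < 600:
        return "Type II (oil prone)"
    return "Type I (highly oil prone)"

print(kerogen_type_from_hi(40))   # Type IV, the dominant class in this dataset
print(kerogen_type_from_hi(250))  # mixed Type II/III
```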
Type II Kerogen
The analyzed Montney Formation sediments show that Type II kerogen is present in the study area (Figure 9). Type II kerogen is oil prone [11], relatively rich in hydrogen, and characterized in its pure (monomaceral) form by exinite [54]. Materials from which Type II kerogen is derived include spores and pollen grains of land plants, marine phytoplankton cysts, and some leaf and stem cuticles [48][54]. The occurrence of Type II kerogen depends on high biological productivity due to nutrient supply, low mineralogical dilution, and restricted oxygenation [54].
Type III Kerogen
Type III kerogen is present in the Montney Formation sediments in the study area (Figure 9). Using the S2 values (remaining hydrocarbon-generating potential) versus TOC, the ratio of Type III to Type IV kerogen is approximately 3:1 (Figure 11). [11] described Type III kerogen as a primarily gas-prone kerogen that contains dominantly vitrinite and is identical to the macerals of humic coal [48], formed from land plants, or largely woody and cellulosic debris [48].
However, various maceral mixtures or degradational processes can contribute to Type III kerogen formation [54]. Type III kerogen is the most reliable kerogen for estimating the degree of maturation using Tmax [47].
Type IV Kerogen
Analyzed data in this study show that Type IV kerogen constitutes the highest percentile in the Montney Formation. [48][50][53][54] defined Type IV kerogen as inertinite (gas prone), composed of hydrogen-poor (HI ≤ 50) constituents and difficult to distinguish from Type III kerogen using Rock-Eval pyrolysis alone [54]. A graphical plot of S2 versus TOC with pseudo-HI indicates that Type IV kerogen constitutes about 80% of the kerogen, based on the Rock-Eval dataset (Figure 11, Figure 12 and Tables 1-3). Type IV kerogen is formed from materials of various origins that have undergone extensive oxidation; in some cases it may represent detrital organic matter oxidized directly by thermal maturation or sedimentological recycling [54], or organic facies reworked from a previous depositional cycle [48][50][53][54].
Thermal Maturity
Thermal maturity of organic-rich sediment is the resultant effect of temperature-driven, time-dependent reactions that convert sedimentary organic matter (source rock) to oil, wet gas, and finally to dry gas and pyrobitumen [50]. Thermal maturity is conventionally classified into three categories: 1) immature; 2) mature; and 3) post-mature source rocks [48][50]. Knowing a rock's remaining source-rock capacity solves only one part of the source-rock evaluation puzzle; it is also necessary to know the level of thermal maturity the source rock has reached [12]. Maturity can be estimated by several techniques [11][45][46][47][48]. In this study, Tmax and vitrinite reflectance (Ro) measurements were used to determine the thermal maturity of the Montney Formation in the study area. The key to using maturity parameters effectively lies in evaluating the measured data carefully (and sometimes with skepticism) and, whenever possible, obtaining more than one maturity parameter [48]. Thus, Tmax, vitrinite reflectance (Ro) and Production Index (PI) were interpreted separately in this study, and the three maturity parameters were then compared to verify similarity or dichotomy between the datasets.
The amount and composition of hydrocarbons generated from a particular kerogen vary progressively with increasing maturity [55]. Thermal maturity of kerogen is commonly measured using Tmax and vitrinite reflectance [3][12][63][64]; however, other parameters are also used as indicators of thermal maturity [48][51]. Tmax and the transformation ratio for organic matter (OM) Types I, II and III/IV show that maximum paleotemperatures and vitrinite reflectance indicate the level of kerogen maturity [45].
Description: The Montney Formation Thermal Maturity-Tmax
Tmax is defined as the pyrolysis temperature at which the maximum amount of hydrocarbons is released by kerogen [5]. It corresponds to the maximum of the S2 peak in Rock-Eval pyrolysis (Figure 10 and Figure 11), the point at which the abundance of artificially generated hydrocarbons is greatest as the temperature is ramped up to 550˚C [46]. The macromolecular kerogen network is cracked during pyrolysis to give an estimate of the thermal maturity of a source rock [3][5][65]. Tmax values in analyzed core samples from the Montney Formation in the study area range from 347 to 526 (Tables 1-3). The average Tmax values per well range from 423 to 567 and were plotted as a Tmax contour map to show the geographical distribution of thermal maturity within the study area in Fort St. John, northeastern British Columbia. The statistical distribution of the analyzed Tmax values for the Montney Formation in the study area shows that >90% of the reported values lie between Tmax 450 and Tmax 528.
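A simple Tmax-based window classifier makes these statistics concrete. The thresholds below (~435 ˚C for the onset of the oil window, ~470 ˚C for the post-mature/gas window) are commonly cited conventions, not values taken from [51], and the exact cutoffs shift with kerogen type:

```python
def maturity_from_tmax(tmax_c):
    """Approximate thermal-maturity window from Rock-Eval Tmax (deg C).
    Thresholds are commonly cited conventions and shift with kerogen type."""
    if tmax_c < 435:
        return "immature"
    if tmax_c < 470:
        return "mature (oil window)"
    return "post-mature (gas/condensate window)"

# >90% of the Montney Tmax values reported here lie between 450 and 528 deg C
for tmax in (420, 455, 500):
    print(tmax, "->", maturity_from_tmax(tmax))
```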
Interpretation of Tmax
The interpretation of thermal maturity using the Tmax criteria of [51] indicates that more than 90% of the Montney Formation samples reported in this study are thermally mature (Figure 12 and Figure 13). The geographical distribution of Tmax values in the study area prompted a consideration of the factors controlling thermal maturity and their relationship with the geothermal gradient (Figure 13). Understanding the geothermal regime of a sedimentary basin is important for studying its evolution as well as the accumulation of hydrocarbons and other energy resources [66]. The generation of hydrocarbons (oil and gas) in any basin depends on the temperature reached by the organic-rich source rocks during their burial history [67]. Several workers [68]-[80] have reported on heat transfer processes (convection and conduction), observed geothermal patterns, thermal and hydraulic conductivities, heat generated internally in the crust by the decay of radioactive elements, the regional-scale distribution of geothermal gradients, hydrogeological effects in establishing geothermal patterns, and the statistical distribution of geothermal values in the Western Canada Sedimentary Basin. Using the geothermal calculations of [67][72][73][74][75] for the Western Canada Sedimentary Basin, a comparison with the distribution of Tmax in the study area shows no particularly striking relationship with the distribution of geothermal gradient, owing to the small size of the study area; there appears to be no distinct spatial pattern of Tmax values. [75] shows a regional-scale (basin-wide) distribution of geothermal gradient across the entire Western Canada Sedimentary Basin, with a NW-SE increase in geothermal gradients. [69] reported a northerly trending increase in heat flow, interpreted to be caused by crustal thinning. The controlling mechanisms of heat transfer in the Western Canada Sedimentary Basin are conduction and convection by moving fluids or the flow of formation water [75], together with hydrogeological effects [72]-[80]. This interpretation of geothermal distribution identifies the underlying factors responsible for the Tmax values in the study area. The geothermal gradient accounts for the thermal maturity differences evidenced by the Tmax values of the Montney Formation in northeastern British Columbia (where the Montney is mainly a gas-prone reservoir) and in Alberta (where it is mostly oil prone). The type of hydrocarbon produced (oil vs. gas) from the Montney Formation in the two provinces is interpreted herein to be related to a geothermal gradient that differentially affected source-rock thermal maturity in British Columbia and Alberta [69][75]. The differential heating of the Montney Formation kerogen, at higher temperatures in British Columbia than in Alberta, as shown by [69], is responsible for the types of hydrocarbons that have been generated in each province.
Description: The Montney Formation Thermal Maturity-Vitrinite Reflectance
The vitrinite data analyzed from the Montney Formation in this study are shown in Table 7. The available organic matter for each analyzed sample varies from 0 to 6 (Table 7), and the vitrinite particles available for analysis range from 0 to 4. The measurement of vitrinite particles involves recording the percentage of incident light, usually at a wavelength of 546 nm, reflected from vitrinite particles under oil immersion [61]. The non-availability of vitrinite particles (zero values in Table 7), and the very low number of vitrinite particles in the organic matter composition, resulted in low confidence levels, as shown in Table 7 (using a ranking scale of 0-9). The level of thermal maturation of the Montney Formation kerogen, as revealed by vitrinite reflectance (Ro) analysis, ranges from Ro 0.74% to 2.09%. Samples with no vitrinite particles to measure are designated null (zero values) in Table 7.
Interpretation of Vitrinite Reflectance (Ro)
Vitrinite is a type of kerogen particle formed from humic gels thought to be derived from the lignin-cellulose cell walls of higher plants [81]. Vitrinite is a common component of coal, and the reflectance of vitrinite particles was first observed in coals to increase with increasing time and temperature in a predictable manner [82]. Based on the vitrinite reflectance data from the Montney Formation in the study area, Ro ranges from 0.74% to 2.09%, which is interpreted herein as primarily gas-prone maturity (Figure 14) using the standard vitrinite interpretation criteria (Table 5) of [51]. This interpretation has credibility because it corresponds to the same indication of gas-window maturity given by the Tmax interpretive standard of [51], as shown in Figure 12. However, it is not unusual to encounter low availability of vitrinite particles during laboratory analysis, as seen in some of the samples in Table 7.
Low or absent vitrinite particles can make it difficult to differentiate primary vitrinite, and insufficient grains prevent a reliable determination of sample reflectance; both factors degrade the quality of vitrinite reflectance data [64]. Similarly, inconsistencies or errors can arise from the measurement of vitrinite reflectance itself [12][83], and variation in the chemical composition of vitrinite may lead to invalid comparisons of vitrinite gradients [64]. Although these analytical issues mean that vitrinite reflectance results should be viewed with some skepticism [48], the method remains useful and is conventionally employed in thermal maturity determination [63].
Vitrinite reflectance in source-rock kerogen is related to the hydrocarbon generation history of the sediments [64]. Vitrinite reflectance has been used successfully as an indicator of organic maturation in source rocks, indicating potential areas of oil and gas generation within a prospect [50]. Vitrinite reflectance (Ro) is one of the methods used to evaluate the thermal transformation of organic-rich sedimentary rocks [63] in hydrocarbon exploration [1][45][48][57]. Vitrinite reflectance increases during thermal maturation due to complex, irreversible aromatization reactions [50]. It has been established that vitrinite reflectance correlates well with coal rank, which is primarily a function of time and temperature [60].
The thermal transformation of vitrinite can be related to geothermal gradient and paleotemperature [64]; it proceeds by a series of irreversible chemical reactions that cause organic matter alteration due to thermal cracking [63][84]. Thus, vitrinite reflectance is used as a thermal maturation indicator that provides a means of determining the maximum temperature exposure of sedimentary rocks [63][84].
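A widely quoted empirical link between the two maturity parameters used in this study is the conversion Ro ≈ 0.0180 × Tmax − 7.16 (calibrated for Type II/III kerogen by Jarvie and co-workers). The sketch below applies it for orientation only; it is an approximation, not a substitute for measured reflectance:

```python
def ro_from_tmax(tmax_c):
    """Estimate equivalent vitrinite reflectance (%Ro) from Tmax (deg C)
    via the empirical relation Ro ~= 0.0180 * Tmax - 7.16 (Type II/III
    kerogen calibration); indicative only."""
    return 0.0180 * tmax_c - 7.16

for tmax in (435, 470, 500):
    print(f"Tmax {tmax} C -> ~{ro_from_tmax(tmax):.2f}% Ro")
# Yields ~0.67%, ~1.30% and ~1.84% Ro, bracketing the measured
# Montney range of 0.74%-2.09% Ro reported in Table 7.
```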
Description: Thermal Maturity-Production Index (PI)
The Production Index (PI) data for the Montney Formation from the Rock-Eval analysis show very low values (ranging from 0.11 to 2.6). More than 90% of the PI values from the study area are less than 1. The relationship between PI and Tmax is shown in Figure 15.
Interpretation of Production Index (PI)
The Production Index (PI) is a parameter used in conjunction with other thermal maturity parameters to indicate the type of hydrocarbon generated [50]; it was interpreted here based on the geochemical parameters describing thermal maturation (Table 8). The PI values in this study indicate that the Montney Formation sediment is mostly mature to post-mature (Table 8 and Figure 15).
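PI is conventionally computed from the Rock-Eval peaks as PI = S1/(S1 + S2), which by construction lies between 0 and 1 (so the few reported values above 1 are best treated with the same skepticism recommended earlier for single maturity parameters). A minimal sketch with hypothetical peak values:

```python
def production_index(s1, s2):
    """PI = S1 / (S1 + S2), both peaks in mg HC/g rock; bounded by 0 and 1."""
    return s1 / (s1 + s2)

# Hypothetical mature sample: S1 = 0.6, S2 = 2.8 mg HC/g rock
print(round(production_index(0.6, 2.8), 2))  # 0.18
```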
Porosity Data-Description
Approximately thirty data points from the Montney Formation samples were analyzed for porosity (bulk-volume porosity and gas-filled porosity) in relation to depth (Figure 16). The data show a side-by-side pattern in which gas-filled porosity nearly mimics bulk-volume porosity (Figure 16). The highest porosity value (Table 9) from well 16-17-82-25W6 is 5.67% and the lowest is 1.22%. Some cores of the Montney Formation have porosity greater than 5.6% (Figure 17). Visual observation from thin-section petrographic analysis revealed vuggy porosity (Figure 18).
Interpretation of Porosity
Porosity is dependent on grain texture, which is determined largely by grain shape, roundness, grain size, sorting, grain orientation, and packing (Table 9). The vuggy porosity observed in some intervals of the Montney Formation is associated with biogenic modification of the textural fabric (Figure 18). The porosity observed in thin section is partly associated with organic matter dissolution and replacement by pyrite, and with biogenically produced secondary porosity. Relatively higher porosity in the Montney Formation is also associated with bedding-plane fractures.
Bedding-plane porosity observed in the Montney Formation results from varieties of lamination concentrated parallel to bedding planes. The larger geometry of many petroleum reservoirs is controlled by such bedding planes, which form primarily through differences in sediment calibre or particle size and arrangement influenced by the depositional environment [85].
Permeability Data Description
Measured pressure-decay permeability from cores (Figure 19) shows very low values, ranging from 0.000110 to 0.000337 mD, with a statistically cyclic pattern of variation (Figure 19).
Interpretation of Permeability
Apart from the porosity of a reservoir, the ability of the rock to allow fluid flow through its interconnected pores, i.e., its permeability (kv = kh), is a crucial reservoir parameter in the evaluation of any oil and gas play. The permeability of a rock depends on its effective porosity, which is controlled by grain-size distribution, degree of sorting, grain shape, packing, and degree of cementation [84][86]. The evaluation of the permeability of heterogeneous clastic rocks from core or downhole measurements is one of the most important goals of reservoir geoscience [87].
The results from permeability analyses in this study are related to the overall textural heterogeneity, porosity, and in part, related to ichnofabric modification.
The Montney Formation is composed of dolomitic, silt-sized grains and subordinate very fine-grained sandstone. The implication of grain size for permeability is that smaller grain sizes yield smaller permeabilities than larger grain sizes, because smaller grains produce smaller pores and smaller pore throats, which constrain fluid flow more than the larger pore throats produced by larger grains [86]. Furthermore, the smaller the grain size, the larger the surface area exposed to the flowing fluid, which leads to greater friction between the fluid and the rock and hence lower permeability [86]. [88] have shown a strong correlation between permeability and grain size in unconsolidated sands and gravels, with permeability increasing exponentially with increasing grain size.
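This grain-size argument can be made quantitative with the classical Kozeny-Carman relation, k = φ³d²/[180(1−φ)²], which predicts permeability scaling with the square of grain diameter. The sketch below is illustrative only: Kozeny-Carman assumes an idealized granular pack and substantially overestimates permeability in tight, cemented siltstones like the Montney, but it captures the d² dependence described here.

```python
def kozeny_carman_k(porosity, grain_diameter_m):
    """Kozeny-Carman permeability estimate (m^2) for an idealized granular pack:
    k = phi^3 * d^2 / (180 * (1 - phi)^2)."""
    return (porosity**3 * grain_diameter_m**2) / (180.0 * (1.0 - porosity) ** 2)

M2_TO_MD = 1.01325e15  # 1 m^2 is ~1.01325e15 millidarcy

# Silt-sized (20 um) vs. very fine sand (100 um) grains at 5% porosity:
for d in (20e-6, 100e-6):
    k_md = kozeny_carman_k(0.05, d) * M2_TO_MD
    print(f"d = {d * 1e6:.0f} um -> k ~ {k_md:.2e} mD")
# The 5x larger grain size yields a 25x larger permeability estimate.
```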
Intervals where bedding-plane fractures and ichnofabric modification occur show relatively higher permeability values. The porosity observed in thin section (micron scale) is associated with: 1) dissolution of organic matter or dolomitic material caused by diagenesis; 2) bioturbation-enhanced porosity resulting from organism burrows; and 3) fracture porosity along bedding planes. [89][90] have shown that reservoir quality in thinly bedded, silty to muddy lithologies of low-permeability unconventional reservoirs can be enhanced by burrowing activity.
Fluid Saturation-Data Description
Data analyzed for fluid saturation (gas, mobile oil, water, and bound hydrocarbon saturation) indicate that water saturation is the second most abundant fluid after gas saturation, while mobile oil saturation and bound hydrocarbon saturation (Figure 20) are negligible in comparison with gas saturation (Table 9) or water saturation. By far, gas saturation is very high throughout the measured interval, reaching as high as 99.56% at a depth of 2330.42 m, with the lowest value of 70.25% at a depth of 2415.82 m (Figure 20, Table 9).
Interpretation of Saturation
Fluid saturation refers to the proportion of the pore volume of a rock occupied by each formation fluid (oil, gas, and water) [91]. Results from this study show that gas is the dominant fluid in the interstitial pores of the Montney Formation (Figure 20), varying from 99.64% to 62.59% through the depth profile. The oil saturation curve is nearly constant, indicating very low (0.81% to 7.64%) oil saturation through the depth profile. The high gas saturation confirms that the Montney Formation in northeastern British Columbia is mainly a gas reservoir. Water saturation varies significantly, in an inversely proportional pattern with gas saturation.
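Because saturations are fractions of the same pore volume, they must close to 100%. The sketch below (hypothetical fluid volumes) converts measured fluid volumes into saturations and checks that closure:

```python
def saturations(vol_gas, vol_water, vol_oil):
    """Convert fluid volumes occupying the pore space into saturations (%).
    Saturations are fractions of the total pore volume and must sum to 100%."""
    pore_volume = vol_gas + vol_water + vol_oil
    return {
        "Sg": 100.0 * vol_gas / pore_volume,
        "Sw": 100.0 * vol_water / pore_volume,
        "So": 100.0 * vol_oil / pore_volume,
    }

# Hypothetical gas-dominated sample; volumes in cm^3
s = saturations(vol_gas=0.90, vol_water=0.08, vol_oil=0.02)
print(s)  # Sg ~90%, Sw ~8%, So ~2%
assert abs(sum(s.values()) - 100.0) < 1e-9  # closure check
```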
The relationship of water saturation to gas saturation is interpreted in terms of the ratio of gas to water in the pore volume. The relatively low water saturation is crucial because water in the pore space of low-permeability rocks occupies critical pore-throat volume and can greatly diminish hydrocarbon permeability, even in rocks at irreducible water saturation [92]; because of their small pore-throat sizes, this effect is typical of low-permeability, gas-producing sandstones.
Source-Rock Quality
For a source rock to have economic potential or exploration prospectivity, sufficient organic matter (OM) must have generated hydrocarbons. The measure of source-rock quality is the total organic carbon (TOC) content, and guidelines for ranking source-rock quality were proposed by [11]: 1) poor: TOC 0.00-0.50 wt% in shale, 0.00-0.12 wt% in carbonates; 2) fair: TOC 0.50-1.00 wt% in shale, 0.25-0.50 wt% in carbonates; 3) good: TOC 1.00-2.00 wt% in shale, 0.25-0.50 wt% in carbonates; 4) very good: TOC 2.00-4.00 wt% in shale, 0.5-1.00 wt% in carbonates; and 5) excellent: TOC >4.00 wt% in shale, >1.00 wt% in carbonates.
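The shale scale above translates directly into code. A minimal sketch implementing only the shale thresholds (the carbonate ranges as printed overlap between the "fair" and "good" classes, so they are omitted here):

```python
def shale_toc_rating(toc_wt_pct):
    """Rank shale source-rock richness by TOC (wt%) using the shale guideline
    ranges quoted above; carbonate ranges are omitted because, as printed,
    the 'fair' and 'good' carbonate brackets overlap."""
    if toc_wt_pct < 0.5:
        return "poor"
    if toc_wt_pct < 1.0:
        return "fair"
    if toc_wt_pct < 2.0:
        return "good"
    if toc_wt_pct < 4.0:
        return "very good"
    return "excellent"

print(shale_toc_rating(2.4))  # "very good" for a hypothetical 2.4 wt% sample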
Using the guidelines of [11] above, the TOC content of the Montney Formation in the study area is variably and statistically distributed, in order of highest percentile, into low TOC (<1.5 wt%), medium (1.5-3.5 wt%), and high (>3.5 wt%). Based on these results, the Montney Formation in the study area has good total organic carbon (TOC) richness (Figure 21). In addition to the TOC content, the Montney Formation kerogen has been interpreted and classified into: 1) Type III kerogen, which is primarily gas prone [11][12][48]; 2) Type IV kerogen, an inertinite (gas prone) composed of hydrogen-poor constituents and difficult to distinguish from Type III kerogen using Rock-Eval pyrolysis alone; and 3) mixed Type II/III kerogen, which is oil prone [11][46][48], relatively rich in hydrogen, and characterized by materials such as spores and pollen grains of land plants, marine phytoplankton cysts, and some leaf and stem cuticles [48][54].
Thermal Maturity
The Montney Formation exhibits a range of thermal maturities (immature, mature, and post-mature). However, the statistical distribution of Tmax values in the Montney Formation within the study area shows that >95% of the reported values lie between Tmax 430 and Tmax 528, which is within the gas window [51].
The vitrinite reflectance (Ro) results in this study show that the Montney Formation in the study area is thermally mature and composed mainly of gas-prone kerogen with some oil (Figure 14). The Tmax, vitrinite reflectance and Production Index (PI) data show strong mutual correlation; using multiple maturity parameters, as argued by [48], is a better method of assessing the accuracy of a thermal maturity index. Tmax, Ro and PI (Figure 12, Figure 14 and Figure 15) yield the same thermal maturity, and this agreement boosts the credibility of the thermal maturity synthesized and reported for the Montney Formation herein. However, some data points also indicate poor to fair quality.
Conclusions
Source-rock geochemistry evaluation is a pivotal step in the assessment of a hydrocarbon reservoir. The Montney Formation source-rock characteristics presented in this study show that TOC is statistically distributed into low (<1.5 wt%), medium (1.5-3.5 wt%), and high (>3.5 wt%) categories. The analysis and interpretation in this study show that the Montney Formation in the study area is rich in TOC and thermally mature. The hydrocarbon type associated with the Montney Formation is mainly thermogenic gas, derived from kerogens of Type III/IV and mixed Type II/III. The geographical distribution of thermal maturity shows that the kerogen is pervasively mature across the study area.
The prospect and potential of hydrocarbon exploration are driven by and dependent upon economics; primary factors of significant importance serve as a yardstick.
Figure 1. Location map of the study area showing wells (red) that penetrated the Montney Formation in northeastern British Columbia and Alberta, Canada.
Figure 2. Type log of the Montney Formation in the study area, northeastern British Columbia, Western Canada Sedimentary Basin (WCSB), adapted from [24].
Figure 3. Structure contour map of the Montney Formation in the study area, northeastern British Columbia. Dashed contour lines indicate no data points for well control. The structure decreases in elevation westward, indicating that the sediment source area was to the east and prograded westward [2].
Figure 6. Rock-Eval pyrolysis for a Montney Formation sample (well 2-19-79-14W6, depth: 2085 m). (a) illustrates the effect of pyrolysis temperature with Rock-Eval. The S1 peak is the free hydrocarbon liberated during thermal decomposition below 300˚C. The S2 peak is derived from the thermal cracking of kerogen during pyrolysis (pyrolyzed fraction); the temperature at the S2 maximum corresponds to Tmax. (b) shows the S3 peak (CO2) corresponding to 400˚C, which represents the oxidized CO2; it also shows the difference in organic matter. (c) illustrates the pyrolysis carbon monoxide (CO). (d) shows the oxygen indices; the determination of the oxygen index (OI) is based on CO2 and CO, with OI(CO) = S3CO × 100/TOC and total oxygen index = OI(CO2) + OI(CO). (e) shows the S4 peak, the oxidation carbon monoxide (CO); the peak indicates the presence of siderite (400˚C-600˚C). (f) oxidation CO and CO2. The red line is the temperature trace over 25 minutes from 300˚C to 650˚C. The distinctly bi-modal curve is due to pyrobitumen.
Figure 8. Plot of Oxygen Index (OI) vs. Hydrogen Index (HI). The low OI and HI values indicate that the Montney Formation in the study area contains primarily Type III/IV kerogen, with some mixed Type II/III kerogen.
Figure 9. Pseudo-Van Krevelen diagram showing kerogen types and TOC richness in the Montney Formation.
Figure 10. Remaining hydrocarbon-generating capacity (S2 peak) in the Montney Formation.
Figure 12. Thermal maturity of the Montney Formation determined with Tmax and vitrinite reflectance (Ro). The dotted line (Ro) is vitrinite reflectance calibrated with Tmax. This shows that the Montney Formation in northeastern British Columbia is extensively mature [24].
Figure 13. Geographical distribution of Tmax values in the study area.
Figure 14. Total organic carbon (TOC) vs. vitrinite reflectance (Ro), showing the thermal maturity of the Montney Formation source rock and the hydrocarbon-generating phases in the Montney Formation sediments from the study area, northeastern British Columbia [24].
Figure 15. Production Index (PI) vs. Tmax, showing that the Montney Formation kerogen is thermally mature and extensively within the gas-generating window.
Figure 16. Porosity and gas-filled porosity of the Montney Formation (well 16-17-82-25W6). The graph shows excellent correlation between porosity and gas-filled porosity. (Data source: B.C. Oil and Gas Commission.)
Figure 18. Photomicrograph showing the dolomitic siltstone facies and the associated vuggy porosity resulting from dissolution of material. The yellow arrow labeled "P" points to vuggy porosity.
Figure 21. Cross-plot of total organic carbon (TOC wt%) vs. S2 (the amount of hydrocarbon formed during thermal decomposition of kerogen). Higher S2 values indicate greater hydrocarbon-generating potential. In general, the S2 and TOC values show that the Montney Formation in the study area, northeastern British Columbia, has good source-rock quality, although some data points indicate poor to fair quality.
Figure 22. Isopach map of the Montney Formation showing gross thickness in the study area, northeastern British Columbia [24].
Table 1. Rock-Eval data from the Montney Formation, Fort St. John study area and environs, northeastern British Columbia, Canada.
Table 2. Rock-Eval data from the Montney Formation (outside the study area), northeastern British Columbia. Data source: B.C. Oil and Gas Commission, Ministry of Energy, British Columbia.
Table 7. Vitrinite reflectance measured from the Montney Formation sediments in British Columbia.
"Geology"
] |
Power brushing and chemical denture cleansers induced color changes of pre-polymerized CAD/CAM denture acrylic resins
Denture wearers are advised to follow a hygiene protocol combining both mechanical and chemical methods. In this study, the in-vitro color stability of heat-cured, light-cured and newly developed pre-polymerized CAD/CAM acrylic resin denture base materials was evaluated after exposure to mechanical brushing and chemical denture cleansers. Two polymethyl methacrylate (PMMA) materials (heat-cured and pre-polymerized CAD/CAM) and one urethane dimethacrylate-based resin denture base material were subjected to mechanical brushing, followed by immersion in chemical denture cleansers (Corega, 5.25% sodium hypochlorite (NaOCl), and 0.2% chlorhexidine gluconate (CHG)) and thermal cycling to simulate one year of normal prosthesis use. Baseline and final color measurements were taken with a bench-top UV-visible spectrophotometer and the color difference (ΔE) was calculated. The highest (29.69 ± 1.84) and lowest (19.03 ± 8.78) mean ΔE were observed with the light-cured and CAD/CAM materials immersed in 0.2% CHG, respectively. Tukey's post-hoc test showed that the heat-cured and light-cured resins immersed in any of the denture cleansers showed no significant differences (p > 0.05) in mean ΔE values. On the contrary, the CAD/CAM material immersed in the different denture cleansers demonstrated significant differences in mean ΔE values (p ≤ 0.05). A statistically significant interaction between material and denture cleanser (F = 4.890; p = 0.001) was observed. The color stability of the pre-polymerized CAD/CAM acrylic discs is comparatively better than that of the conventional acrylic resin materials. However, the color changes of all tested materials were above the clinically acceptable range, regardless of the type of denture cleanser used.
Introduction
Acrylic resins have been widely used in removable dental appliances, maxillofacial appliances, and surgical guides since their introduction to dentistry in the 1930s [1]. Acrylic resins provide a number of advantages, including low cost, ease of handling, favorable mechanical and physical characteristics, and a pleasing appearance. On the other hand, they convey some disadvantages, such as allergic reactions, color change, abrasion, and porosity [2,3].
The mixing ratio of the resin components, the polymerization device, the polymerization time and the technician are factors to be considered during the fabrication of conventional PMMA resins [4]. Recently, CAD/CAM PMMA-based polymer blocks have been produced for use as denture base materials [4][5][6][7], with the claim that they have better mechanical properties than conventional resins [8]. Previous studies have shown that CAD/CAM PMMA-based polymer blocks release less residual monomer and hence have improved optical properties and color stability [9,10]. The CAD/CAM denture base shows the least color change because it can be well polished, has no porous structure, absorbs less water, and wears less [10].
The color and esthetic properties of the denture base are among the prime properties considered in a denture, in order to meet patients' expectations [11,12]. Ideally, the base should match the color and appearance of the surrounding tissues [13]. Multiple factors affect the color stability of denture base materials, such as water absorption, stain accumulation, degradation of intrinsic pigments, dissolution of ingredients, food, beverages, chemical disinfectants and surface roughness [14,15]. Furthermore, color is influenced by the resin matrix composition and its polymerization process [16,17].
Denture cleaning methods range from mechanical (manual brushing), to chemical (household bleach or commercially available cleansers), to a combination of both. Although mechanical cleaning is an effective and easy means of plaque control, it can cause wear of denture base materials, resulting in surface defects on the acrylic resin denture that promote biofilm growth and pigmentation [18,19]. Chemical cleaning methods, on the other hand, offer a variety of active agents, including enzymes, hypochlorite solutions, and peroxide solutions, with hypochlorite and peroxide solutions being superior to enzymes [20][21][22][23]. Cleansers are favoured by removable denture patients due to their ease of handling, affordability, and cost-benefit [24,25]. Cleansers have been shown to be effective in reducing biofilm formation on complete dentures [26][27][28].
Chlorhexidine is an antiseptic agent with broad-spectrum antimicrobial activity against bacteria, viruses, and yeast species [29][30][31][32]. Chlorhexidine gluconate is a water-soluble molecule that dissociates at physiological pH, releasing positively charged chlorhexidine, which is attracted by the negative charges on bacterial surfaces [24]. Thus, chlorhexidine has been a common antiseptic of choice for disinfecting dentures infected by Candida albicans. However, chlorhexidine solution has been shown to negatively affect the hardness and roughness of acrylic resins, as well as to induce brownish discolouration of the acrylic base [33]. Chlorhexidine gluconate at a 0.2% concentration has been used successfully as an antiseptic oral rinse in the treatment of denture stomatitis, while the 2.0% suspension is used for overnight denture disinfection [30,33,34].
Sodium hypochlorite has been shown to be effective against various microbes and acts on microorganisms penetrating up to 3 mm below the surface [35,36]. It has also been demonstrated to be more effective than brushing against some specific microorganisms [37,38].
The sodium perborate in effervescent tablets is a chemical soak-type product that swiftly decomposes in water to produce an alkaline peroxide solution. This peroxide solution then releases oxygen, providing mechanical cleaning by oxygen bubbles in addition to chemical cleaning [39]. Corega/Polident (GlaxoSmithKline plc) is composed of various active agents, including sodium bicarbonate, citric acid, sodium perborate monohydrate, potassium peroxymonosulfate, sodium benzoate, sodium lauryl sulfoacetate, peppermint flavor, and subtilisin [36].
Until now, there has been little evidence regarding the effect of denture cleansers on the color stability of newly developed pre-polymerized CAD/CAM acrylic resins. The aim of the present in-vitro study was to evaluate and compare the color stability of conventional heat-cured and light-cured acrylic resins and pre-polymerized CAD/CAM acrylic resins after they were subjected to mechanical brushing, immersed in chemical denture cleansers, and thermal-cycled. The hypothesis was that pre-polymerized CAD/CAM acrylic resins would demonstrate superior color stability compared to conventional acrylic resins.
Sample size was determined using G*Power v. 3.1.9.3 freeware (Heinrich-Heine-Universität Düsseldorf, Germany). The calculation indicated a minimum of six samples per group at an effect size (f) of 0.6, power of 0.8, and α = 0.05. The sample size was, however, increased to ten to compensate for any specimen loss during the study.
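For readers without G*Power, the same calculation can be approximated with statsmodels' ANOVA power solver. The number of groups here is an assumption (nine cells for 3 materials × 3 cleansers), since the text does not state the k used:

```python
from statsmodels.stats.power import FTestAnovaPower

# Assumed design: 3 materials x 3 cleansers = 9 groups (the k used in
# G*Power is not stated in the text, so this is an assumption).
total_n = FTestAnovaPower().solve_power(
    effect_size=0.6,  # Cohen's f, as reported
    alpha=0.05,
    power=0.8,
    k_groups=9,
)
print(total_n, total_n / 9)  # total N and approximate n per group
```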
Specimen preparation
Heat-cured PMMA acrylic resin discs were fabricated following the flasking and pressure-pack method [40]. Following deflasking, a steam jet (Wassermann Dental-Maschinen, Hamburg, Germany) was used to clean the discs, and surplus flash was removed with carbide cutters (Black Hawk Cutter, GmbH & CIE, Berlin, Germany). Light-cured resin discs were fabricated using a silicone putty mold lined with a separating medium. Baseplate acrylic resin was compacted into the mold with finger pressure and then coated with an air barrier (Eclipse ABC, Dentsply Trubyte, USA) to prevent polymerization inhibition by oxygen. The specimens were then polymerized for 10 min using a visible-light curing unit (Eclipse Processing Unit, Dentsply Trubyte) operating at wavelengths of 400 to 500 nm [41]. The pre-polymerized CAD/CAM specimens were designed with Zenotec CAD software (Wieland Digital Denture, Ivoclar Vivadent, Liechtenstein) and milled using a Zenotec select ion unit (Wieland Digital Denture, Ivoclar Vivadent, Liechtenstein).
All specimens were then engraved with a number (1-30) and a notch for orientation before finishing and polishing (figure 1(a)). The specimens were finished with the sequential use of waterproof silicon carbide paper at 300 rpm under water cooling. Polishing was done using pumice in a compact unit (Derotor, London, England) that included a polishing lathe, a 45-mm polishing brush, and a pleated buff nettle cloth (Renfert GmbH, Industrie-Gebiet, Hilzingen, Germany). Finally, the specimens were cleaned with a plain toothbrush under running water, followed by a steam jet, before baseline color measurements. The specimens were then allocated into three groups according to the chemical denture cleanser used (n=10).
Mechanical brushing
After baseline color measurements and allocation, the specimens were exposed to mechanical brushing using an electric brush (Oral-B PRO 1000, Leicester, United Kingdom) as per the manufacturer's recommendations. The electric brush was fixed in a customized stainless steel holder (figure 1(b)) to standardize both pressure and direction. The CrossAction brush head had bristles at 16° angulation with a 3D cleaning action combining 20,000 pulsations and 8,800 rotations per minute. Brushing was performed for 60 min, equivalent to one year of oral use [42]. A constant force of 2 N was applied by means of weights pressing onto the head of the toothbrush throughout the brushing process.
Immersion in denture cleanser solutions
Ten specimens from each group were completely immersed in chemical solutions containing either one denture cleanser tablet (Corega, GSK, Brentford, United Kingdom) dissolved in 250 ml of water, 5.25% sodium hypochlorite (NaOCl), or 0.2% chlorhexidine gluconate (CHG) (figure 1(c)). The immersion process simulated overnight (nocturnal) immersion of 8 h a day and was repeated for a period of 2,880 h, simulating one year of prosthesis use. For each immersion cycle, the resin discs were rinsed under tap water for 10 s, dried with absorbent paper, and immersed in a fresh solution.
Artificial ageing by thermal-cycling
The specimens were thermal-cycled in a thermo-cycler device (Huber 1100, SD Mechatronik, Feldkirchen-Westerham, Germany). The specimens were soaked alternately in hot (55 °C) and cold (5 °C) water baths with a dwell time of 30 s and a transfer time of 10 s. A total of 10,000 cycles was used in this study, simulating one year of prosthesis use [43].
Color measurements
Finally, all the specimens were subjected to final color measurement in a manner similar to the baseline measurements. The color measurements at baseline and after treatments for all the resin discs were performed by a single operator for the purpose of standardization. At all times during the color measurements, the specimens were cleaned and dried using an absorbent paper napkin. The factors studied were the type of resin base material and the type of chemical denture cleanser. The study outcome was the color change (ΔE) observed from baseline to final measurements. The color of the specimens was recorded by a bench-top UV-visible spectrophotometer (Color Eye 7000A, X-Rite, Grand Rapids, Michigan, USA) in the 3-dimensional Commission Internationale de l'Eclairage L*a*b* (CIELab) color space system (figure 1(d)) in the wavelength range of 360-740 nm. The CIELab coordinates (L*, a*, and b*) of the acrylic resin discs were measured relative to the D65 standard illuminant, corresponding to average daylight [44]. The mean of the L*a*b* values at baseline and final color measurements for each specimen was determined, and the color changes were calculated using the following equation (equation (1)): ΔE = [(ΔL*)² + (Δa*)² + (Δb*)²]^(1/2), where ΔE represents the amount of color change; the ΔL* coordinate represents lightness and darkness on a scale of 0 (black) to 100 (white); Δa* and Δb* indicate the chromatic scale: the Δa* coordinate represents redness (positive direction) or greenness (negative direction), and the Δb* coordinate represents yellowness (positive direction) or blueness (negative direction) of the surface.
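As a concrete illustration, equation (1) can be computed directly from a specimen's baseline and final L*a*b* readings. The sketch below uses hypothetical coordinate values, not data from the study.

```python
import numpy as np

def delta_e(lab_baseline, lab_final):
    """CIELab color difference (equation 1): Euclidean distance in L*a*b* space."""
    diff = np.asarray(lab_final, dtype=float) - np.asarray(lab_baseline, dtype=float)
    return float(np.sqrt(np.sum(diff ** 2)))

# Hypothetical readings for one disc: (L*, a*, b*) at baseline and after treatment
print(delta_e((72.1, 18.4, 9.7), (65.3, 15.0, 14.2)))
```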
Statistical analysis
Data were analyzed using SPSS version 24.0 (IBM Inc., Chicago, USA) software for Windows. Mean and SD were used to describe the quantitative outcome variable (color change, ΔE). A two-way analysis of variance was used to quantify the effects of the type of chemical cleanser and the material type on ΔE values. Post-hoc Tukey's test was used for pair-wise mean comparison of ΔE among the three types of chemicals and materials (α=0.05).
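The same analysis can be sketched outside SPSS. The example below is a minimal, self-contained reconstruction with a synthetic data frame; the column names, factor labels, and simulated values are hypothetical stand-ins for the 90 measured discs.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
df = pd.DataFrame({  # synthetic stand-in: 3 materials x 3 cleansers x 10 discs
    'material': np.repeat(['heat', 'light', 'cadcam'], 30),
    'cleanser': np.tile(np.repeat(['corega', 'naocl', 'chg'], 10), 3),
    'dE': rng.normal(24, 4, 90),
})
model = ols('dE ~ C(material) * C(cleanser)', data=df).fit()
print(sm.stats.anova_lm(model, typ=2))              # two-way ANOVA with interaction
print(pairwise_tukeyhsd(df['dE'], df['material']))  # Tukey HSD across materials
```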
Results
The mean and standard deviations of ΔE values are presented in figure 2. The highest (29.69±1.84) and lowest (19.03±8.78) mean ΔE were observed with light-cured materials and CAD/CAM materials immersed in 0.2% CHG, respectively. Among the materials tested, Tukey's post hoc test showed that heat-cured resins immersed in any of the denture cleansers did not show significant differences (p>0.05) in the mean ΔE values. Similarly, light-cured resins also showed non-significant changes in mean ΔE values across the denture cleansers used. On the contrary, CAD/CAM materials demonstrated significant differences in ΔE values between the tested cleansers (p<0.05). Table 1 presents the mean±SD ΔE values with respect to the denture cleansers used. The mean ΔE value for CAD/CAM acrylic resin discs with the denture cleansing tablet (Corega) dissolved in 250 ml of water was statistically significantly higher than the mean ΔE values with the other two types of cleansers for the same CAD/CAM acrylic resins. There was also a significant difference between the mean ΔE values of the two denture cleansers, 0.2% CHG and 5.25% NaOCl, with CAD/CAM acrylic resins.
Also, there was a statistically significant difference in the mean ΔE values between the denture cleanser (Corega) and the other two cleansers with light-cured acrylic resins. Moreover, the mean ΔE value of the denture cleansing tablet (Corega) was significantly lower than the mean ΔE values of 5.25% NaOCl and 0.2% CHG, but no difference was seen between these two chemicals with light-cured acrylic resins. The mean ΔE values of the three denture cleansers were not significantly different from each other with the heat-cured acrylic resins.
The multiple pair-wise comparisons of mean ΔE among the three resins showed a statistically significant difference between heat-cured and light-cured resins (p=0.007), and between CAD/CAM and light-cured resins (p=0.001). On the contrary, no significant difference was observed between CAD/CAM and heat-cured resins (p=1.00) (table 2).
The pairwise comparison of mean ΔE values showed non-significant differences among the denture cleansers used (p>0.05) (table 3).
Two-way analysis of variance (ANOVA) for the effect of material type and denture cleanser on the ΔE values is presented in table 4. The overall model with one outcome variable (ΔE) and two factors (material type and denture cleanser type) was statistically significant (F=4.476; p<0.0001). In the model, material type was statistically significant (F=7.892; p=0.001), but denture cleanser type was not (F=1.733; p=0.177). Hence, the mean difference in ΔE was statistically significant among the material types but not across the denture cleansers used. However, there was a statistically significant interaction between materials and denture cleansers (F=4.890; p=0.001).
Discussion
In the current study, the newly developed pre-polymerized CAD/CAM acrylic resins were compared with conventional heat-polymerized and light-polymerized acrylic resins. The tested denture resin materials showed significantly different ΔE values following exposure to combined methods of denture cleansing. The study hypothesis that the newly developed pre-polymerized CAD/CAM acrylic resins would demonstrate superior color stability compared to conventional acrylic resins was partially rejected. Among the three denture cleansers used, the CAD/CAM resin material showed significantly higher ΔE values when immersed in Corega. On the contrary, when immersed in CHG and NaOCl, the CAD/CAM materials showed significantly lower ΔE values compared to conventional denture resin materials.
In the assessment of color stability, visual evaluation is a very subjective procedure in terms of the physiology and psychology of the evaluator. In contrast, the spectrophotometer used in the present study not only eliminates this subjectivity but also allows identification of even minor color alterations [45]. The CIELab color system has a scale that contains all colors visible to the human eye, and it is used to evaluate perceptual color changes in dental materials [46]. Instrumental color readings also have an advantage over subjective visual readings since instrumental values are objective, quantitative, and more quickly available [47].
In dentistry, the CIELab 50:50% perceptibility threshold (PT) is reported to be ΔE=1.2, whereas the 50:50% acceptability threshold (AT) is reported to be ΔE=2.7 [48]. All denture base materials in this study showed significantly high ΔE values that were clinically unacceptable after mechanical cleansing and immersion in chemical cleansing agents. There are intrinsic and extrinsic factors that might lead to color change of denture base materials [3,14]. These factors include physio-chemical change, the residual monomer used, water sorption, and surface roughness [10]. Among the materials, pre-polymerized PMMA blocks demonstrated better color stability when immersed in CHG and NaOCl. This is because these materials are highly condensed resins with less porosity, superior polishability, and less water absorption compared to conventional acrylic resins, as they are manufactured under extreme heat and pressure conditions [5]. The outcome of this study is in agreement with previous studies that have demonstrated better color stability of CAD/CAM compared to conventionally fabricated acrylic resins [10,16,17].
Contrary to the CAD/CAM materials, light-cured acrylic resin materials demonstrated significantly higher color changes compared to the other study materials. This is consistent with the outcome of the studies by Zuo et al [3], Dayan et al [10], and Hollis et al [41], who demonstrated clinically perceptible color changes with light-cured acrylic resin materials. The authors concluded that this could be due to the high water sorption tendencies of the light-cured resins compared to the other materials [3,10]. The authors also demonstrated that color changes occur to varying degrees and increase with prolonged storage in chemical cleanser solutions. During prolonged immersion, the monomer leaches out and water is absorbed [12]. The urethane dimethacrylate (UDMA)-based Eclipse light-cured material used in this study is also prone to hygroscopic expansion due to two hydrophilic urethane groups within its molecular structure [49]. However, a few authors report that water absorption by denture base acrylics tends to stabilize after approximately 28 days [5,50]. These findings were taken into consideration in the current study, which resulted in a significant color change in the light-cured acrylic group. The immersion process in this study simulated overnight (nocturnal) immersion of 8 h a day. Accordingly, 24 h corresponded to three immersions of 8 h, and the immersion process was repeated for a period of 2,880 h, simulating one year of prosthesis use. This could also be one of the possible reasons for the perceptible color changes of all denture base acrylic resin materials in our study. Denture base polymers are prone to color shift if the cleansing solutions are not handled correctly [51]. The whitening of the denture materials can be attributed to the increased temperature of the water used in the solution [39,52,53]. Studies have demonstrated that the number of microbes on dentures after chemical disinfection is lower compared to brushing [36]. However, brushing was shown to be more efficacious at removing plaque [36,54]. Therefore, it is recommended to take advantage of both mechanical and chemical cleansing methods [36,55]. However, the applied cleansing method must be based on its effectiveness against the microorganisms as well as the characteristics of the denture base materials [56].
We used power toothbrushes with a rotation action in this study because they have been shown to be superior to manual brushes, with findings showing greater plaque removal. Furthermore, patients have shown that power toothbrushes are well received and thus have the ability to increase compliance [57]. These findings suggest that some special-needs patient populations, such as the elderly and disabled, may benefit from the use of powered toothbrushes [58]. Routinely, brushing with toothpaste for 2 min twice a day is recommended, which means a given dental surface may only be in contact with the toothbrush for a maximum of 5 s twice daily; at roughly 10 s per day, one year of use accumulates to about an hour of contact. Consequently, in this study, one hour of brushing equals a year of life for a tooth surface [59].
Several studies have examined the effects of chemical cleansers on the physico-chemical properties of denture base materials [28,36,52,60,61]. Among the chemical denture cleansers used, Corega demonstrated increased staining of the CAD/CAM denture materials. The possible reason for the increased stainability with Corega solutions could be the deleterious combination of oxidation and a strongly alkaline solution [62]. However, the conventional materials immersed in either CHG or NaOCl showed significantly higher color changes compared to the Corega solution. While NaOCl is used for disinfection and plaque control, its most significant drawback is stated to be the possibility of whitening through an oxidation reaction [63]. Furthermore, the concentration of NaOCl and the immersion time could also affect the color change [64].
Similar to NaOCl, the staining potential of CHG is also dependent on its concentration, which ranges between 0.12% and 0.2%. However, Bagis et al report that the staining effect of CHG can be reversed by means of conventional cleaning regimens using bicarbonate or abrasive prophylaxis paste [65]. Furthermore, differences in the ingredients of the solutions, such as sodium carbonate and percarbonate, could also contribute to the significant color changes of materials immersed in different solutions. However, contrary to the findings of the present study, Sato et al [60] did not reveal any color changes in denture base acrylic materials with the use of chemical agents. This may be due to their shorter immersion period (equivalent to 30 days) as well as the lack of instrumental color value detection. The color changes in denture base materials are time-dependent and increase with prolonged immersion [3,10].
This study has a few limitations. First, as this is an in-vitro study, the results are preliminary and need to be tested in in-vivo trials, especially considering the cleansing effect of saliva on the color changes. Nevertheless, in-vivo studies are considered more difficult to conduct. Although the color change of denture base materials examined with an in-vitro methodology may not be as valid as that obtained through in-vivo methods, it can provide useful guidance for clinical practice. Despite its limitations, this study gives valuable evidence regarding the color change of the recently introduced pre-polymerized CAD/CAM denture base acrylic and the effect of various commonly used cleansing agents. Exposure to different agents may cause various degrees of color change. Furthermore, the effectiveness of polishing in clinical practice is operator-dependent, and different polishing techniques may produce varied results. Further research into the effect of different polishing processes and immersion in different staining solutions on the optical characteristics and surface roughness of denture base materials should be conducted based on the findings of this study.
Conclusions
Within the limitations of this in-vitro study, the color stability of the pre-polymerized CAD/CAM acrylic discs was comparatively better than that of the conventional acrylic resin materials. The color changes of all the tested materials were clinically perceptible, regardless of the type of denture cleanser used. Among the materials tested, the color stability of the light-cured acrylic resin was the lowest. All chemical disinfectants used in this study had an effect on color stability, with the Corega tablets having the most significant effect.
Figure 1.
Figure 1. Study procedure: (a) numbered and notched acrylic resin specimen; (b) mechanical brushing with customized holder and electric toothbrush; (c) specimens stored in containers; (d) specimen placed in the port of the spectrophotometer for L*a*b* readings.
Figure 2.
Figure 2. Mean and SD of the ΔE values (* significant difference between Corega and CHG groups; ** significant difference between Corega and NaOCl groups; *** significant difference between CHG and NaOCl groups of the CAD/CAM materials) (p<0.05).
Table 1.
Comparison of mean ΔE values of the three types of denture cleansers across the acrylic resin types. a Statistically significant (Tukey's post-hoc test); b significantly higher than heat-cured resins but not different from light-cured resins; no difference between heat-cured and light-cured resins; c significantly higher than CAD/CAM resins but not different from heat-cured resins; no difference between heat-cured and CAD/CAM resins; d significantly higher than CAD/CAM resins but not different from heat-cured resins; no difference between heat-cured and CAD/CAM resins.
Table 2.
Pair-wise comparison of the effect of the resin material type on the mean ΔE values.
* Statistically significant at p<.05.
Table 3.
Pair-wise comparison of the effect of the denture cleansers type on the mean ΔE values.
Table 4.
Two-way ANOVA for the effect of the material type and denture cleansers on the ΔE values.
* Statistically significant at p<.05.
"Medicine",
"Materials Science"
] |
An Advanced AFWMF Model for Identifying High Random-Valued Impulse Noise for Image Processing
In this study, a novel adaptive fuzzy weighted mean filter (AFWMF) model based on the directional median technique and fuzzy inference is presented for restoring images corrupted by high-ratio random-valued impulse noise. This study aims not only to obtain information from each direction of the filtering window but also to gain information from every pixel of the filtering window. Thus, in order to preserve details and textures for better restoration in high-noise cases, this study utilizes the directional median to build the membership functions of the fuzzy inference dynamically, then calculates the weighted window corresponding to the filtering window using fuzzy inference to represent the importance of valuable pixels. Finally, the restoration pixel is calculated as the weighted mean of the filtering window. This new AFWMF model significantly improves performance in terms of the peak signal-to-noise ratio (PSNR), preserving detail and restoring images at noise densities within the range of 20-70% for the five well-known experimental images. In extensive experiments, this study also shows better performance than the other listed filter methods in terms of the proposed peak signal-to-removal noise ratio (PSRNR) and in psycho-visual tests. Furthermore, the proposed AFWMF model also achieves a better structural similarity index measure (SSIM) value, a second indicator. Conclusively, two interesting and meaningful findings are identified: (1) the proposed AFWMF model is generally the best among the 10 listed filtering methods for image processing in terms of the two quantitative indicators, the PSNR and SSIM values; (2) different filtering methods suit different impulse noise densities, so identifying an appropriate filtering model for a variety of images and noise densities is an important and interesting issue.
Introduction
Impulse noise frequently contaminates digital photographs. A sensor or a noisy transmission medium can pollute the source, making it inaccurate [1,2]. When an image is polluted, this means that some parts of the image are replaced by impulse noise [3,4]. There are two forms of noise in general. One form of noise is known as salt and pepper noise. It has a fixed value at both ends of the gray scale. In 8-bit grayscale, pixels contain noise values of 0 and 255. Another type is called uniform impulse noise or random-valued impulse noise. This noise will disperse its pixel values from 0 to 255 equally [5]. There have been several techniques used for removing salt and pepper noise, and some of them have proven to be effective [6][7][8][9]. However, many academic researchers are still interested in developing a random-valued impulse filter.
Many spatial filters have been developed for removing impulse noise. The most powerful spatial filter is the median filter [1,10,11]. By using this filter, we can remove random-valued impulse noise. The median filter replaces the filtering window's centered pixel with the filtering window's median. This filter applies a median operation to every pixel. Therefore, it modifies both undisturbed pixels and pixels with noise, and some detail which must be preserved in an image is removed [1]. Some modified median filters, such as the weighted median filter [12] and the center-weighted median filter [13], have been implemented to solve this problem. These two filters can operate effectively at low noise ratios but not at high noise ratios, since their performance in terms of spurious and missing detections is constrained [12]. This is resolved by using a switching median filter [13,14]. This filter couples median filtering with a noise detection mechanism. A threshold is used in the detection stage to decide whether a pixel is tainted [12]. The filtering process is executed only for pixels detected as noisy.
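As a rough illustration of the switching idea, the sketch below filters only pixels that deviate strongly from their local median. The threshold value and function name are arbitrary assumptions for illustration, not parameters taken from the cited filters.

```python
import numpy as np
from scipy.ndimage import median_filter

def switching_median(img, threshold=30.0, size=3):
    """Replace a pixel with its local median only when it deviates
    from that median by more than a threshold (switching scheme)."""
    img_f = img.astype(float)
    med = median_filter(img_f, size=size)
    noisy = np.abs(img_f - med) > threshold  # crude impulse detection
    out = img_f.copy()
    out[noisy] = med[noisy]                  # restore only detected pixels
    return out.astype(img.dtype)
```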
Later, various methods for reducing random-valued impulse noise were devised, such as the tri-state median filter [15], the directional weighted median filter [16], the modified switching median filter [17], the fuzzy-reasoning median filter [18], the simple adaptive median filter [19], the efficient decision-based algorithm [20], neural networks and fuzzy decisions [21], the elimination of impulse noise using an efficient algorithm [22], and threshold Boolean filtering [23], but these methods, like the ranking order absolute difference [3,4] and adaptive switching median filters [12], can fail at noise ratios over 50 percent. The above filters apply the standard median filter at the filtering stage. In general, the median filter can perform poorly at high noise ratios of over 50%. As a result, the adaptive fuzzy weighted mean filter (AFWMF) model is proposed in this study to reduce and overcome this problem, as it does not rely primarily on the median operation to replace noise.
The AFWMF model uses a directional median to preserve detail and calculates the weights of pixels in the filtering window with fuzzy rules. Each pixel is assigned a weight representing how much useful information it provides for restoration, which enhances filtering performance. The fuzzy inference membership functions can change dynamically depending on the distribution of the filter window to manage random-valued impulse noise. As a result, it is expected that the AFWMF model will enhance filtering performance. In addition, the detection step of the switching framework directly affects the filter stage of the same framework. Therefore, this study modifies the ROR (robust outlyingness ratio) detection algorithm, using fuzzy rules to make the ROR algorithm more precise and powerful.
This study is structured in the following order. Section 2 reviews the ROR detection algorithm and filter ordering. Section 3 introduces the AFWMF model in detail. Section 4 summarizes the proposed algorithm of the AFWMF model. Section 5 contains the empirical results obtained, illustrating the accuracy of the novel method using multiple testing photos. Finally, conclusions are drawn in Section 6.
Related Work
This section reviews related work for this study, including filtering windows and tag windows, the robust outlyingness ratio, sparsity ranking, and noise models.
Filtering Window and Tag Window
For handling our proposed method, a filtering window and a tag window were defined as follows. A matrix X of size (R × C) was built for the input image. The filtering window W_{i,j} is a square sub-matrix of odd dimensions (2r + 1) × (2r + 1) centered at pixel (i, j), where 1 ≤ i ≤ R, 1 ≤ j ≤ C, and r ≥ 1.
The corresponding binary tag matrix T is the same size as X. The sub-matrix of T is called the tag window T_{i,j}; its central pixel is at location (i, j) on the image, the same as W_{i,j}. When the pixel x_{i,j} in Equation (1) was detected as a noisy pixel, the tag t_{i,j} in Equation (2) was set to one; when x_{i,j} was not detected as a noisy pixel, t_{i,j} was set to zero.
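A minimal sketch of these two structures in code; the array names mirror the reconstructed symbols above, and the toy image and flagged pixel are hypothetical.

```python
import numpy as np

def filtering_window(X, i, j, r=1):
    """(2r+1) x (2r+1) window W_{i,j} centered at (i, j); image borders clipped."""
    return X[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]

X = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)  # toy image
T = np.zeros_like(X, dtype=np.uint8)  # tag matrix: t_{i,j} = 1 marks a noisy pixel
T[2, 3] = 1                           # e.g., pixel (2, 3) flagged by the detector
print(filtering_window(X, 2, 3))
```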
Robust Outlyingness Ratio
In order to be more robust and effective, we used a coarse-to-fine strategy based on an iterative framework for detecting random-valued impulse noise [12,16,[24][25][26]. We used ROR to estimate whether the current pixel was noise or not [16]. We briefly review this algorithm as follows. • Coarse stage: 1. Set the iteration counter to 1 and choose the coarse thresholds; initialize the tag matrix with zeros. 2. For every pixel in the image, find its ROR; if it falls in the fourth level, as shown in Table 1, then that pixel is noise-free and its tag is set to 1. Otherwise, find the relative divergence d between the filtering window's median and the active pixel, compare d to the coarse threshold selected by the pixel's ROR level, and classify the pixel as noisy or noise-free; according to the result, update the tag matrix. 3. If the iteration counter has not reached its maximum, increment it and repeat Step 2; otherwise, this stage is complete.
• Fine stage: 1. Set the iteration counter to 1 and choose the fine thresholds; initialize the tag matrix with zeros. 2. For every pixel in the image, find its ROR; if it falls in the fourth level, as shown in Table 1, then that pixel is noise-free and its tag is set to 1. Otherwise, find the relative divergence d between the filtering window's median and the active pixel, compare d to the fine threshold selected by the pixel's ROR level, and classify the pixel as noisy or noise-free; according to the result, update the tag matrix. Similarly, calculate the values for all the pixels. 3. If the iteration counter has not reached its maximum, increment it and repeat Step 2; otherwise, this stage is complete. The threshold values are shown in Table 2. For the ROR algorithm, the operation is carried out in two stages, i.e., detection followed by filtering. For detection, the ROR is first measured to find the impulse likelihood of each pixel; then, all the pixels are divided into four levels according to their ROR values.
Second, different decision rules were used to detect the impulse noise based on the absolute deviation from the median in each cluster. In order to make the detection results more accurate and robust, the coarse-to-fine strategy and the iterative framework were used for filtering. The structural flowchart of ROR is shown in Figure 1.
Sparsity Ranking
This study uses a method to determine the filtering order called sparsity ranking [27]. Its main idea is to first restore regions where the noise ratio is lower and then handle noisier regions later. Because a region with a lower noise ratio has more uncorrupted pixels, it is easier to restore; once fixed, it provides more useful pixels for handling noise in the noisier regions.
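A minimal sketch of this ordering idea, assuming the binary tag matrix from the previous subsection; the window size and helper name are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sparsity_order(T, size=3):
    """Return noisy-pixel coordinates sorted by ascending local noise
    density, so pixels with many clean neighbors are restored first."""
    local_density = uniform_filter(T.astype(float), size=size)
    coords = np.argwhere(T == 1)                      # row-major order
    return coords[np.argsort(local_density[T == 1])]  # least-noisy regions first
```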
Noise Model
When the image is polluted, part of the image will be replaced by impulse noise of a random value. The probability of this noise determines the degree of image deterioration. The noise model is given in Equation (3) below:
y_{i,j} = n_{i,j} with probability p, and y_{i,j} = o_{i,j} with probability 1 − p, (3)
where p is the noise ratio or probability of noise, y_{i,j} is the pixel value at row i and column j, and o_{i,j} and n_{i,j} are the pixel intensities of the gray-scale image at position (i, j), corresponding to the original pixel and the noisy pixel, respectively. The pixel intensity of n_{i,j} lies between the minimum and maximum ([L_min, L_max]; in an 8-bit gray-scale image, L_min = 0, L_max = 255). As a result, there are two forms of noise. One form is known as salt and pepper noise; it has a fixed value at either end of the gray scale. The other type is uniform impulse noise, which has a uniform distribution on [L_min, L_max] and is also known as random-valued impulse noise. The second sort of noise was examined in this research.
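A minimal sketch of this noise model for generating test images; the function name and seed are hypothetical, and an 8-bit image is assumed.

```python
import numpy as np

def add_rv_impulse_noise(img, p, seed=None):
    """Random-valued impulse noise (Equation 3): with probability p a
    pixel is replaced by a value drawn uniformly from [0, 255]."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape) < p                     # corrupt with probability p
    noisy[mask] = rng.integers(0, 256, size=int(mask.sum()))  # uniform on [0, 255]
    return noisy
```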
The Proposed Adaptive Fuzzy Weighted Mean Filter Model
The proposed AFWMF model can be divided into three stages. The first stage is the proposed detection method, which combines ROR and fuzzy rules to build a corresponding tag matrix. The second stage uses a filter ordering method called sparsity ranking [27]; its main idea and process were discussed in the previous section. The last stage uses fuzzy rules to filter corrupted pixels according to the tag matrix from the first stage, in the filtering order from the second stage. Figure 2 represents the system flowchart of the proposed model, and its key concepts and the algorithms of its main components are described in the next three subsections.
Fuzzy ROR Noise Detection
First, the stop iteration condition in the original ROR algorithm is defined as Equation (4):
PSNR_k ≤ PSNR_{k−1}, (4)
where PSNR_k represents the peak signal-to-noise ratio (PSNR) of the current iterative process, and PSNR_{k−1} represents the PSNR of the previous iterative process. These two PSNRs are computed against the original image; however, during restoration neither the original image nor the noise is known. Therefore, this study modifies the PSNR for stopping iteration. This new stopping criterion is called the peak signal-to-removal noise ratio (PSRNR); it can be calculated from the restored images of the previous and current iterations. PSRNR was defined through extensive experiments and is shown as Equations (5) and (6):
PSRNR_k − PSRNR_{k−1} ≤ 0, (5)
PSRNR_k = 10 log10( 255² / ((1/(R·C)) Σ_{i,j} (y_{i,j}^{(k)} − y_{i,j}^{(k−1)})²) ), (6)
where k is the iteration number, y_{i,j}^{(k−1)} is the restored pixel value of the previous iteration, y_{i,j}^{(k)} is the restored pixel value of the current iteration, PSRNR_k is the current PSRNR, and PSRNR_{k−1} is the previous PSRNR. With the above criteria, the noise detection scheme is named the No-Reference ROR (NR-ROR) algorithm in this study.
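A minimal sketch of this no-reference score, following the reconstruction above; the exact form of Equations (5)-(6) is inferred from the surrounding definitions, so the function is a hedged approximation rather than the authors' exact formula.

```python
import numpy as np

def psrnr(prev_restored, curr_restored):
    """PSNR-like score between two successive restorations; it needs no
    access to the unknown original image (cf. Equations (5)-(6))."""
    diff = curr_restored.astype(float) - prev_restored.astype(float)
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```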
Noise Cancellation by Adaptive Fuzzy Directional Weighted Mean
The algorithm we propose is akin to a switching median filter. The main difference between our proposed method and that filter is that ours computes local image statistics for each picture pixel rather than only utilizing the median to replace the noise. More precisely, it utilizes the standard deviation and fuzzy inference to estimate the restoration pixel value of the filtering window, where the output of the fuzzy inference represents the importance of a pixel. For the above reasons, this filter method is called the AFWMF model in this study. The filtering starts after the filtering order is determined, as follows: let S_{i,j}^{(d)} (d = 1, ..., 4, with r = 2) be the set of four directional sub-windows of pixels centered at location (i, j), defined along the horizontal, vertical, and two diagonal directions. Accordingly, the standard deviation of each of the four directional sub-windows is calculated as shown in Figure 3; the standard deviation measures the degree of data dispersion [28].
σ_{i,j}^{(d)} = sqrt( (1/N) Σ_{x ∈ S_{i,j}^{(d)}} (x − μ_{i,j}^{(d)})² ), where μ_{i,j}^{(d)} is the mean value of S_{i,j}^{(d)} and N represents the total number of elements in S_{i,j}^{(d)}.
Next, the minimum standard deviation is given as σ_{i,j}^{min} = min_d σ_{i,j}^{(d)}, where the direction attaining σ_{i,j}^{min} is the one whose pixels are closest to each other. Therefore, the central pixel value of the window should also be close to them in order to preserve the image detail or texture. For preserving detail or texture, we obtain a reference median from this direction, called m_{i,j}; it is given as Equation (10) below: m_{i,j} = median(S_{i,j}^{(d*)}), where d* is the direction with the minimum standard deviation.
The standard deviation σ_{i,j} of the local window is defined using Equation (11) below: σ_{i,j} = sqrt( (1/|W_{i,j}|) Σ_{x ∈ W_{i,j}} (x − μ_{i,j})² ),
where μ_{i,j} is the mean value of W_{i,j}. Afterwards, we form the membership functions. There are three membership functions corresponding to three input fuzzy sets; they are noise1, no-noise, and noise2, respectively, and are pictured in Figure 4. In Figure 4, the horizontal axis is the pixel value. The vertical axis shows the grade of membership of the element x in the fuzzy set, with values ranging from 0 to 1. These three membership functions are the three input fuzzy sets. They are formatted as Equations (12)-(14) below, respectively.
m_{i,j} is a reference value that helps retain the details of the picture, and σ_{i,j} represents the degree of dispersion of the filtering window. Pixels of the window farther from m_{i,j} have lower importance; conversely, closer pixels have higher importance. Then, there are two membership functions corresponding to two output fuzzy sets, low and high; these are displayed in Figure 5. In Figure 5, the horizontal axis is the importance value, ranging from 0 to 1. On the vertical axis, we have the membership grade of the fuzzy set, which also ranges from 0 to 1. These two membership functions are the output fuzzy sets. They are defined as Equations (15) and (16) below, respectively.
In this study, this characteristic is combined with fuzzy inference to decide the weight corresponding to each pixel. The output (the weight corresponding to the pixel) of the fuzzy inference of the proposed AFWMF model is based on three rules in the filter stage. Using these rules to obtain the weights, the restoration pixel value y*_{i,j} is then calculated as Equation (17):
y*_{i,j} = Σ_{x ∈ W_{i,j}} w_x · x / Σ_{x ∈ W_{i,j}} w_x, (17)
where w_x ∈ [0, 1] is the weight corresponding to pixel x of the filtering window, W_{i,j} = {x_{s,t} | i − r ≤ s ≤ i + r, j − r ≤ t ≤ j + r}, and y*_{i,j} is the restoration pixel value, which is regarded as the final output.
In order to preserve detail or texture in the image and overcome the weakness of median filters, we utilize the weights from fuzzy inference. According to Equation (17), the filtering window's pixel values are multiplied by their associated weights to determine to what degree each pixel should contribute in the filtering process. The weight of a pixel represents its degree of agreement with its immediate surroundings; as a result, it is utilized to regulate the level of restoration that takes place. Pixel weight indicates the level of importance of the pixel, and this characteristic decides how much useful information is used for restoring.
Finally, the restored pixel value is obtained with the weighted mean as the final output, as shown in Figure 6, in which an underlined number means that the pixel is corrupted by noise. Figure 7 shows noise corruption in a light region, and Figure 8 shows how the membership functions shift location according to Equations (10) and (11). Figure 9 shows noise corruption in a dark region, and Figure 10 shows the corresponding shift of the membership functions according to Equations (10) and (11).
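To make the filter stage concrete, the sketch below restores one pixel from its 5×5 window. The directional sets and the shape of the membership-derived weights are simplified reconstructions of Equations (7)-(17), not the authors' exact formulas.

```python
import numpy as np

def afwmf_pixel(win):
    """Restore the centre pixel of a square window with a fuzzy weighted mean."""
    r = win.shape[0] // 2
    # four directional pixel sets through the centre: horizontal, vertical, two diagonals
    dirs = [win[r, :], win[:, r], np.diag(win), np.diag(np.fliplr(win))]
    ref_med = np.median(dirs[int(np.argmin([d.std() for d in dirs]))])  # m_{i,j}
    sigma = max(win.std(), 1e-6)                                        # sigma_{i,j}
    # importance weights: 1 at the reference median, decaying with distance from it
    w = np.clip(1.0 - np.abs(win - ref_med) / (3.0 * sigma), 0.0, 1.0)
    return float((w * win).sum() / max(w.sum(), 1e-6))                  # Eq. (17)

win = np.random.default_rng(1).integers(0, 256, (5, 5)).astype(float)  # toy window
print(afwmf_pixel(win))
```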
Algorithm of the Proposed AFWMF Model
The algorithm of the proposed AFWMF model is summarized in detail below; it is an improved and enhanced version of the ROR algorithm.
• The enhanced coarse stage includes the following six steps: 1. Choose the parameters, set the iteration counter k = 1, and initialize the tag matrix to all zeros. 2. For every pixel in the image, find its ROR and the relative divergence d between the filtering window's median and the active pixel. 3. Use the coarse stage of ROR described in Section 2.2 to detect the noise in the active pixel. Good and noisy pixels are represented by zeros and ones, respectively. 4. Use Equations (7)-(14) to build the input membership functions and, according to the three-rules-based filter stage described in Section 3.2, obtain the pixel weights in the filtering window. 5. Obtain the restored pixel value from Equation (17); let k = k + 1. 6. Use Equation (5) to judge whether to stop the enhanced coarse stage; otherwise, go to Step 2.
• The enhanced fine stage includes the following six steps: 1. Choose the parameters, set k = 1, and initialize the tag matrix to all zeros. 2. For every pixel in the image, find its ROR and the relative divergence d between the filtering window's median and the active pixel. 3. Use the fine stage of ROR described in Section 2.2 to detect the noise in the active pixel. Good and noisy pixels are represented by zeros and ones, respectively. 4. Use Equations (7)-(14) to build the input membership functions and, according to the three-rules-based filter stage described in Section 3.2, obtain the pixel weights in the filtering window. 5. Obtain the restored pixel value from Equation (17); let k = k + 1. 6. Use Equation (5) to judge whether to stop the enhanced fine stage; otherwise, go to Step 2.
Experiment Results and Discussions
Initially, the evaluation metrics for the filters, including the mean square error (MSE) and the PSNR, are defined in Equations (18) and (19), respectively. The MSE is used to compute the PSNR, and the PSNR is a quantitative indicator for evaluating the performance of different filters. Importantly, the smaller the MSE, the better the performance; conversely, a bigger PSNR indicates a better performance. Moreover, this study uses another quantitative indicator, the structural similarity index measure (SSIM), for further measuring the filtering performance; its formula is defined as Equation (20) below. The SSIM is an index mainly used to predict the perceived quality of digital images and measure the similarity between two images. The difference between the PSNR and the SSIM is that the former estimates absolute errors, whereas the latter captures the structural information that spatially close pixels have strong inter-dependencies. More importantly, both the PSNR and SSIM are effective evaluation standards from past studies.
MSE = (1/(R × C)) Σ_{i=1}^{R} Σ_{j=1}^{C} (O_{i,j} − Y_{i,j})², (18)
PSNR = 10 log10( 255² / MSE ), (19)
where R and C represent the dimensions of the image, R × C is the size of the image, O represents the original image, and Y is the image that has been restored. The SSIM gives the similarity measurement between the original and restored images as in Equation (20):
SSIM = ( (2 μ_O μ_Y + C1)(2 σ_OY + C2) ) / ( (μ_O² + μ_Y² + C1)(σ_O² + σ_Y² + C2) ), (20)
where μ_O and μ_Y are the mean intensities of the original and restored images, σ_O and σ_Y are the standard deviations of the original and restored images, σ_OY is the covariance of the original and restored images, L is the dynamic range of the pixel values, and C1 and C2 are regularization parameters. To further verify the performance of the proposed AFWMF model, five famous testing pictures, namely Lena, Peppers, Boat, Gold Hill, and Plane, are used in this study for the image-processing experiments. They are pictured in Figure 11, which was generated at an 8-bit grey level. All their resolutions are 512 by 512. In order to make the experiments more general and precise, this study conducted two experimental evaluations: one compared the PSRNR of the AFWMF model with the PSNRs of different filters; the other compared the psycho-visual result of the AFWMF model with those of different filters. The compared filters are BDND [9], DWM [16], EAIF [22], FRDFM [18], SBF [3], SDOOD [29], ROR [25], ASMF [12], and EBDND [30], since they were all identified to have good performances in the literature reviews in this area. The experimental results are described with measurement indexes in the next four subsections, and these results and the different filters are discussed in the last subsection.
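A minimal sketch of these metrics; the SSIM line defers to scikit-image's implementation rather than re-deriving Equation (20).

```python
import numpy as np

def mse(original, restored):
    return np.mean((original.astype(float) - restored.astype(float)) ** 2)  # Eq. (18)

def psnr(original, restored):
    m = mse(original, restored)
    return np.inf if m == 0 else 10.0 * np.log10(255.0 ** 2 / m)            # Eq. (19)

# For SSIM (Eq. 20), an off-the-shelf implementation can be used, e.g.:
# from skimage.metrics import structural_similarity
# score = structural_similarity(original, restored, data_range=255)
```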
Restoration Performance Measurements
This evaluation of performance uses Equation (19), and the Lena image is first used to compare the proposed AFWMF model with the filters proposed by other studies. The characteristic Lena image has an approximately normal distribution; that is to say, detail, flat regions, shading, and texture are all included. Thus, the Lena image is an excellent comprehensive testing image for assessing restoration results and comparing restoration performance. Table 3 shows the comparison of the PSNR measurement values on the Lena image; the noise is set from 10% (i.e., 0.1) to 90% (i.e., 0.9). (Note: to lower the overabundance of figures and tables, this study only uses the empirical results of the Lena image as a representative demonstration here, but a summary table completely concludes the results of the five images in this subsection.) From Table 3, it can be seen that the AFWMF model has a slightly worse PSNR value than the DWM and ROR filters at the low 10% noise density and at the high 80-90% noise densities. However, the AFWMF model's PSNR value clearly outperforms the other filter techniques at the higher ratios of 20% to 70% noise densities. Importantly, when the ratio of noise density is within 20-70%, the proposed AFWMF model significantly outperforms all the existing filters selected and used for the Lena image in this study. For processing the Lena image, the order of PSNR performance is AFWMF > DWM > ROR > ASMF > SDOOD > SBF > FRDFM > EAIF > BDND > EBDND. The proposed AFWMF model is the best filtering model for the Lena image. For further evaluating the performance on all five images used in this study, Table 4 and Figure 12 show the additional comparisons of the PSNR average measurements for noise of 10% to 90%. From the above table and figure, the ranking is in the same order as for the Lena image, AFWMF > DWM > ROR > ASMF > SDOOD > SBF > FRDFM > EAIF > BDND > EBDND, in terms of the PSNR average value for the five images. Thus, this is effective evidence that the proposed AFWMF model is the best filter technique in this study. Furthermore, Table 5 presents the best filter model at different noise densities in terms of the PSNR average value for the five experimental images (the bold in the table refers to the best case). It is found that it is best to use the DWM model for a low 10% noise density, the ROR model for high 80% to 90% noise densities, and the AFWMF model for 20% to 70% noise densities.
Psycho-Visual Performance Comparison
In order to evaluate the psycho-visual performance in detail preservation, this experiment uses the real restoration images as the testing target to measure the different filters on all five images used. The noise is also set at ratios of 10-90% noise density, but 60% random-valued impulse noise is selected randomly and used to contaminate the first figure for the result presentation of the psycho-visual test in this study. Thus, the experiments are run to evaluate the restoration images at 60% noise density. (Note: to cut down the complexity of excessive figures and tables, this study only displays the empirical results at 60% noise density as a representative figure presentation for the five images in this subsection.) As a result, Figures 13-17 show the measurement results for the psycho-visual performance comparison on these images for assessing the different filtering techniques. Obviously, the proposed AFWMF model is still preferable to the other filtering models by personal observation; however, in order to validate the psycho-visual comparison, this study invited three experts who specialize in this image-processing field to help with the visual evaluation. In particular, Table 6 shows the average ranking of the performance comparison at 60% noise made by the three experts. Table 6 indicates that the proposed AFWMF model is still the best one and has a better performance in psycho-visual testing at a ratio of 60% impulse noise density. From Table 6 and these figures, it is clear that the top three models are AFWMF, ROR, and DWM in rank order. It is shown that the proposed AFWMF model performs better than the other filtering models in terms of psycho-visual comparison and recognition at various testing points, such as shading, flat regions, detail, and edges, according to the experts. In addition, to further evaluate the key impact of the 10 filtering models used for noise interference processing, this study also describes different levels of noise density for the image processing to improve the visual evaluation of the three experts. They again ranked the images restored by the 10 filters from 10% to 90% interference for the five experimental images. Subsequently, the average ranking of the performance by the same three experts was calculated and is shown in Tables 7-9 for different levels of noise: 30% (low), 50% (median), and 70% (high), respectively. From Tables 7-9, it is also clear that the proposed AFWMF model performs the second best at a 30% noise density and is the best at both the 50% and 70% noise densities. In general, the proposed AFWMF model still has the best performance in psycho-visual comparison for the five images compared to the other listed filtering models, both generally and comprehensively. Productively and meaningfully, the proposed AFWMF model has a better performance than the other listed filtering models in terms of psycho-visual comparison, regardless of the level (low, medium, or high) of noise density involved. Thus, extra experiments are further needed to provide this evidence in other ways or with other images. The table fragment below corresponds to Table 9 (average expert ranking at 70% noise density); the headers of the two leftmost image columns were lost in extraction and are presumed to be Lena and Gold Hill:
Filter   Lena  Gold Hill  Boat  Peppers  Plane  Count  Rank
BDND     9     10         9     9        9      46     9
DWM      4     4          3     3        6      20     4
EAIF     8     8          8     8        8      40     8
FRDFM    6     5          5     6        4      26     6
SBF      7     7          6     7        7      34     7
SDOOD    3     2          6     4        3      18     3
ROR      2     3          1     1        2      9      2
ASMF     5     6          4     5        5      25     5
EBDND    10    8          9     10       10     47     10
AFWMF    1     1          2     2        1      7      1
Stop Iteration Comparison
It is important to understand the iterative condition of the related ROR algorithm with respect to the picture (or image) and its noise density. In this experiment, this study verifies the new stop criteria for stopping iteration when the original image and noise density are unknown, avoiding over-repairing the picture (i.e., removing detail texture). Regarding the iteration environment, the stop condition is set up under the same ROR iterative architecture, and the median filter is used to filter out the noise. The iteration number of the original ROR is shown in Table 2, which is based on the original image and the noise density. Referring to Equations (4)-(6), (18), and (19), the number of iterations for the new conditions is provided in this study. Furthermore, the noise density is selected at the low, medium, and high noise levels (30%, 50%, and 70%, respectively) from Table 2 using the same five images.
As a result, the average PSNR of the five images for the verification results is shown in Tables 10 and 11, referring to Table 2 and using Equations (5) and (6), respectively. Table 10 uses the number of iterations estimated from the original ROR, while in Table 11 the stop condition using the PSRNR formula is obtained under the principle of an unknown original image and no noise information. The tables show that the comparable PSNR results for the proposed AFWMF model are very close to those of the original ROR method, which uses known noise density conditions. Therefore, the experimental results show that the proposed AFWMF model can effectively support the stop criterion for the iteration and obtain a set of optimal PSNR values that is very close to that obtained with known information about the original image. Afterwards, it is also necessary to compare the PSRNR algorithm with the original ROR algorithm under different parameters for the noise densities in Table 2. In fact, the noise density and the original image are not known when the noise is filtered out, so the iteration number cannot be taken from Table 2 as in the original ROR method. Therefore, for the experimental fairness of the testing results, the implementing process was tested under the same ROR architecture and the same noise densities as in Table 2. That is, the noise density is also selected at the same low, medium, and high noise levels (30%, 50%, and 70%) for the performance comparison. The experimental results are shown in Tables 12-16 for the five images. It is found that in most cases (about 10/15 = 0.67), the empirical result is very close to the repair effect found in Table 2 when the proposed PSRNR method is used as the stop condition. Thus, the proposed AFWMF model still conclusively has the best result out of all the filter techniques used in this study.
Performance Comparison of SSIM
Afterwards, to further verify the performance of the 10 filtering models, this study uses the second measurement indicator (i.e., the SSIM value) for processing the noise density of images. Thus, Table 17 and Figure 18 show the SSIM average value at different noise densities from the 10 filter models for the five experimental images. Regarding the empirical results, it is found that the ranking order is AFWMF > ASMF > DWM > ROR > SDOOD > SBF > FRDFM > EAIF > BDND > EBDND in terms of the performance of the SSIM average value. From Tables 4 and 17, two interesting and meaningful findings are defined and determined: (1) the proposed AFWMF model is the best one for processing impulse noise density in terms of both the PSNR and SSIM values; (2) different filtering methods are best suited to different impulse noise densities, so it is important to identify an appropriate filtering model for various images and noise densities.
Discussions for the Related Results
From all the experimental results, the study raises some questions related to the different filtering models in the following two directions. (1) The AFWMF model is the best method for denoising at almost all noise levels for the Lena, Gold Hill, and Boat images; however, its performance on the Peppers and Plane images is slightly worse at some ratios of noise density when compared to the DWM and ROR models. A possible reason is that the AFWMF model improves its filtering performance using a more complicated and varied feature, form, or style than the other listed filtering models mentioned in this study. (2) The relationship between PSNR and noise density for the 10 listed filter models, and the advantages/disadvantages or characteristics of the 10 filter models, are discussed in the following: (a) The BDND filter is proven to operate efficiently even under high noise densities. However, a critical parameter needs to be defined in the filtering step of the BDND algorithm. (b) The DWM is based on the differences between the current local pixel and its neighbors aligned with four main directions. Then, the minimum of these four direction indexes is used for impulse detection. It makes full use of the characteristics of impulses and edges to detect and restore noise. DWM has an excellent performance in low-noise conditions. (c) The EAIF algorithm consists of two steps: fuzzy impulse noise detection and impulse noise cancellation. The EAIF has a slightly better performance in PSNR evaluation at 10% and 20% noise. It ranks eighth out of the ten models. (d) FRDFM generally has a better performance at a higher noise percentage. For example, under 80% and 90% noise, it ranks fifth among the 10 methods. However, it only ranks seventh overall. (e) SBF ranks sixth overall and has an average performance at various noise percentages. (f) SDOOD ranks fifth overall, but it ranks poorly at low noise and better at high noise, ranking third at 90% noise. (g) The ROR method does not necessarily have a good performance at low noise, because ROR is an improvement mechanism for DWM under high noise, so it has a better performance under high noise. It is in third place. (h) The ASMF method has a good performance under various noise conditions. Its overall ranking is fourth. (i) The EBDND is used for some special cases, so it is more difficult to define its critical parameters. Thus, it does not have a good performance without the definition of special parameters. It has the worst performance in terms of PSNR evaluation. (j) The proposed AFWMF method combines the benefits of ROR, DWM, and others, so its overall average has the best performance.
Conclusions
This study has proposed a new filtering framework called the AFWMF model. We propose four key points and concerns relating to the image-processing abilities of this new model. (1) First, this study transforms the original ROR algorithm into an expert system of fuzzy inference, since the fuzzy sets of fuzzy inference embody a multi-value concept. Thus, the original ROR algorithm is combined with fuzzy inference in the proposed model; the empirical results then show that the fuzzy ROR can find noise in pictures (images) more precisely. (2) Second, this study proposes a new stop condition, the PSRNR. In the literature [12,16,25], the noise density and original image are used as references or standards for stopping iteration; however, these approaches exhibit a drawback, as in practice the original image and noise density are unknown. Interestingly, the proposed PSRNR can bridge this gap. (3) Third, in general, the median filter only uses a majority-decision strategy. This strategy has a serious problem: the median filter is ineffective at filtering out noise when the noise density is over 50%. Thus, this study combines the use of directional medians and fuzzy inference to overcome this problem. (4) Finally, this study uses fuzzy inference membership functions that can shift dynamically to accommodate random-valued impulse noise according to the distribution of the filter window; thus, it can improve the performance of image processing.
The five well-known images, Lena, Peppers, Boat, Gold Hill, and Plane, were used as the experimental targets for evaluating the proposed AFWMF model. After all the experiments were run, this novel AFWMF model was found to significantly improve processing performance in terms of the PSNR value, preserving detail and restoring images at noise densities within the range of 20% to 70% for the experimental images. In extensive experiments, the method described in this study also showed a better performance in psycho-visual tests and in the PSRNR evaluation than the other listed filter methods. Furthermore, the proposed AFWMF model also has a better SSIM value. Conclusively, the proposed AFWMF model was generally found to be the best model among the 10 listed filtering methods for image processing in terms of the two quantitative indicators, PSNR and SSIM. From the experiments, five directions were found for presenting the empirical results of the proposed AFWMF model. (1) The proposed AFWMF model had a better performance in terms of the PSNR value and psycho-visual testing than the other filters for different random-valued impulse noises; notably, the scarce literature on this topic shows that random-valued impulse noise is very difficult to remove. (2) The proposed AFWMF model performed well in terms of restoration performance and visual appearance in this study. (3) The proposed AFWMF model showed a better restoration of detail and texture than the other filters, especially when the noise density ratio was within the range of 20 to 70%. (4) In total, this study provides good evidence that the proposed AFWMF model performs better in terms of PSNR and SSIM values. (5) Meaningfully, it was found that different impulse noise densities call for different filtering methods; it is important to find a suitable filtering model for processing images with various noise densities.
Interestingly, three meaningful findings for subsequent image-processing applications were identified: (1) the proposed AFWMF model was the best one for processing different impulse noise densities in terms of both the PSNR and SSIM values; (2) it is important to find an appropriate filtering method for use with a variety of images; (3) from the empirical results, it is evident that the DWM model can be used for a low 10% noise density, the ROR model can be used for high 80% and 90% noise densities, and the AFWMF model can be used at 20-70% noise densities. Although the study shows some benefits for image processing, our techniques still need to be improved. For example, providing a new resolution for image processing will be necessary in future research.
"Computer Science"
] |
The Impact of Incomplete Faces of Spokes-Characters in Mobile Application Icon Designs on Brand Evaluations
In this article, we explore how incomplete spokes-character faces (versus complete spokes-character faces in application icon designs) make a positive impression on users, and we outline the boundary conditions. Across three studies, we find incomplete spokes-character faces to be an effective image icon tool. In study 1, we find that spokes-characters with incomplete faces improve users’ brand evaluations. In study 2, we find that incomplete spokes-character faces create perceptions of anthropomorphism, which lead to more favorable brand evaluations by enhancing the interpersonal closeness between the user and the brand. The results of study 3, however, show that the type of social exclusion (control vs. ignored vs. rejected) moderates the relationship between incomplete spokes-character faces in mobile application icons and brand evaluations.
INTRODUCTION
Modern life has become more mechanized, automated, and digitized, increasing convenience but also resulting in a loss of a sense of humanity (Schroll et al., 2018). Because human connection is a basic need for individuals (Baumeister and Leary, 1995), many managers aim to personalize their brands, products, or spokes-characters by giving them human characteristics (Garretson and Burton, 2005; Aggarwal and Mcgill, 2012; Folse and Burton, 2012; Nazuk and Sajeev, 2018) to achieve brand differentiation.
Notably, marketers make full use of spokes-character strategies in mobile applications with the increasing popularity of the mobile Internet. Currently, marketers often apply spokes-characters in icon designs for mobile applications to improve the liveliness of pages. In particular, spokes-characters are used in the designs of the launch icons of apps. A spokes-character can act as the "face" of an app, which leaves an impression on users when they learn about the app in an ad or use it in daily life. The app launch icon (hereafter referred to as the icon) can be seen in app store lists, where the app can be downloaded, and can be clicked to access the app; in addition, the spokes-character faces on these icons can be manipulated. Interestingly, some firms (e.g., Hopster; see Appendix A) use spokes-characters with incomplete faces in icon designs, which we refer to as incomplete spokes-character faces, whereas other companies (e.g., Happy Cow Find Vegan Food; see Appendix A) use complete spokes-character faces in icon designs. However, it is not clear whether these incomplete spokes-character faces in icons truly enhance perceptions of anthropomorphism or generate any positive effects. Our primary research questions are whether the use of such incomplete spokes-character faces affects brand evaluations and, if so, how.
In previous studies, Hagtvedt (2011) shows that an incomplete firm name (in which parts of the characters of the company's name are intentionally left blank, e.g., IBM) improves perceptions of a firm's innovativeness but lowers perceptions of its trustworthiness. Nazuk and Sajeev (2018) explored the effect of active white space (AWS), the space between individual logo design elements used as a modification of the logo design that still retains its associated style (e.g., the Starbucks logo), on pictorial logo designs. However, our work differs from Hagtvedt (2011) in that their work was limited to typeface logos or brand logo designs with intentionally blank elements, whereas we investigate the effect of an incomplete spokes-character in a mobile icon, in which the face of the spokes-character appears to be intentionally hidden (e.g., Happy Cow Find Vegan Food; see Appendix A) and interacts with users.
In response, this article offers a systematic investigation of the effects of the use of incomplete spokes-character faces in mobile application launch icons on brand evaluations. The facial completeness of spokes-characters in mobile application launch icon designs is easy to manipulate, and the launch icon often is the first touchpoint users have with a mobile application. Thus, in study 1, we start by demonstrating a positive effect of the use of incomplete spokes-character faces in the design of mobile application launch icons. The next study reveals the serial mediation process underlying this effect (completeness → perceptions of anthropomorphism → interpersonal closeness → brand evaluations). Study 2 shows that using an incomplete (vs. complete) spokes-character face increases perceptions of anthropomorphism, which lead to more positive brand evaluations by enhancing users' interpersonal closeness to the spokes-character. Finally, in study 3, we demonstrate that the results reverse when the individual is rejected rather than ignored.
These findings contribute to several research streams. First, based on gestalt theory, some scholars believe that incomplete objects (e.g., product advertisements, product names, ad photos, and brand logos) may lead individuals to seek closure of the designs (Nazuk and Sajeev, 2018), which may leave an impression on them in the process and result in positive evaluations (Peracchio and Meyers-Levy, 1994; Henderson and Cote, 1998; Miller and Kahn, 2005; Pieters et al., 2010; Hagtvedt, 2011; Nazuk and Sajeev, 2018). In our research, visual metaphor theory illustrates why individuals positively evaluate incomplete spokes-character faces. Considering spokes-characters with anthropomorphic characteristics, we assume that when a spokes-character has an incomplete face in an icon, users may believe that the spokes-character is "playing" with them the way a human would, which may make users feel closer to the spokes-character and assess the brand positively.
Second, our idea that an incomplete spokes-character face humanizes an animal by creating perceptions of anthropomorphism is related to but different from previous studies on anthropomorphism. Anthropomorphism refers to the attribution of human-like qualities to non-human objects (Aggarwal and Mcgill, 2007; Epley et al., 2007), such as by making a product appear alive (Waytz et al., 2010). Although the tendency to anthropomorphize is pervasive, people do not anthropomorphize all objects (Guthrie, 1993), nor are they able to anthropomorphize different objects with equal ease. The literature suggests that the ability to anthropomorphize may depend on the presence of specific features (Tremoulet and Feldman, 2000). For example, perceived movement in an object (such as a spokes-character with an incomplete face) can create the impression that it is alive (Tremoulet and Feldman, 2000). We extend the scope of anthropomorphism theory. This broader perspective not only enhances the theoretical understanding of humanization processes but also provides a novel way for companies to endow their mobile application icons with anthropomorphic features.
Third, previous studies on spokes-characters have focused on the effects of their characteristics (e.g., sincerity, excitement, and competence; Callcott and Alvey, 1991) on building brand equity (Folse and Burton, 2012; En-Chi, 2014) or on brand defense (Aaker et al., 2004; Folse et al., 2013). However, there is a lack of research on the shape characteristics of spokes-characters. In the present research, we emphasize the impact of incompleteness in the design of spokes-characters' faces in icons on perceptions of anthropomorphism and on users' perceptions of interpersonal closeness between themselves and spokes-characters, which benefits brand evaluations.
Incomplete Spokes-Character Faces
Spokes-characters, often referred to as advertising icons (visual images that are cartoon- or human-like), offer potential benefits to brand equity and can symbolically communicate a brand's attributes, personality, or benefits (Garretson and Burton, 2005; Folse and Burton, 2012). Most research on spokes-characters concerns brand building, such as establishing unique and favorable brand evaluations (Barbara, 1996; Phillips, 1996). Specifically, their characteristics (e.g., likability, expertise, and attractiveness) help shape brand perceptions (Callcott and Alvey, 1991) or brand equity (Folse and Burton, 2012; En-Chi, 2014). In addition, a few scholars have studied the role of spokes-characters' personality traits in brand defense (Aaker et al., 2004; Folse et al., 2013).
Most scholars have focused on the effect of the personality of the spokes-character on brand evaluations. However, few studies have explored how to design spokes-characters in specific situations, such as in icons for mobile applications or web pages. We find that there are two kinds of spokes-character designs in icons: those with incomplete faces and those with complete faces. Previous studies have provided some insight into completeness (e.g., in sculptures, brand names, and brand logos). Studies have shown that incomplete objects prompt people to seek closure or provide explanations for missing parts by making them search for those parts, which leads to positive product evaluations (Peracchio and Meyers-Levy, 1994). Moreover, Hagtvedt (2011) indicates that incompleteness in brand name designs (when parts of the letters in the name are intentionally left blank, as in the IBM logo) leads consumers to perceive the company as more innovative but less trustworthy, an influence mediated by the perceived interestingness of the incomplete brand name relative to the complete one. In contrast, consumers perceive complete brand names to be more reliable and less innovative, an effect mediated by the perceived clarity of the name (Hagtvedt, 2011).
Generally speaking, a signal may be a perceptible stimulus. The literature on social signal processing indicates that an individual's sensory apparatus receives a physical stimulus (a pattern of physical energy) and dynamically models it under the influence of Gestalt laws (Wertheimer, 1938). In addition, the signal may be produced by a virtual character, an animal, a machine or another entity (Poggi and D'Errico, 2010a). In particular, a signal is information (a simple or complex perception produced by one or more physical stimuli) from which the receiver can extract further information (Vinciarelli et al., 2011). Therefore, the spokes-character in an icon, as a kind of signal, can also lead users to gain information and develop perceptions of the icon or the brand. Some scholars propose that compared with a complete logo, an incomplete logo prompts consumers to make an effort to comprehend it, which makes the brand communication clearer and leads to more positive visual evaluations (Nazuk and Sajeev, 2018). A complete entity makes people feel comfortable in terms of visual balance (Pracejus et al., 2006; Olsen et al., 2012). In addition, some studies have shown that the perceptual ambiguity caused by incomplete objects prompts people to seek closure or to provide explanations for the missing parts, which can improve evaluations by stimulating positive emotions (Peracchio and Meyers-Levy, 1994). In supplying the missing visual portions themselves, individuals experience a sense of accomplishment from resolving the slight visual ambiguity, which increases the overall positive affect derived in the process (Peracchio and Meyers-Levy, 1994; Nazuk and Sajeev, 2018). Similarly, an incomplete object (e.g., a logo or name) may prompt people to seek closure for the missing part, thereby inducing positive attitudes (Peracchio and Meyers-Levy, 1994; Zhao and Meyer, 2007; Nazuk and Sajeev, 2018).
Based on conceptual metaphor theory, we speculate that when a spokes-character with an incomplete face is featured in an icon design, the user will focus on providing an explanation for the incomplete face (the spokes-character is playing with users) rather than trying to imagine the complete face of the spokes-character (seeking closure). Metaphor is the basis of people's cognition, thinking, experience, language and even behavior; it is a mapping between different experiential fields or conceptual structures. Metaphor theory emphasizes the similarity between source and target objects (Lakoff and Johnson, 1980). Visual metaphor is a type of metaphor that indicates that an object is similar to another object by comparing two completely different images (Lagerwerf et al., 2012) and that provides an effective way to understand new things through the transplantation of elements between unfamiliar and familiar things (Zhong and Zhang, 2009). Previous research has shown that visual metaphors cannot be fully described in formal terms; instead, they must be viewed as visual representations of metaphorical ideas or concepts (Refaie, 2003). We define visual metaphor as a conceptual metaphor that is evoked by visual stimuli and triggers personal thinking. Individuals tend to make comparisons with perceptually similar objects, likely based on size, shape, spatial orientation, color, etc. (Schilperoord et al., 2009). Moreover, visual metaphors cover four domains: (1) natural phenomena, (2) artificial reality, (3) activities, and (4) abstract concepts (Eppler and Burkard, 2004). Visual metaphor involves physical similarity (e.g., between an egg and the earth) or psychological similarity (e.g., between a dove and peace).
Besides, social emotions are emotions related to social relations (Lewis, 2000), such as pride, shame or embarrassment, or feelings of disgust, jealousy, contempt or admiration toward others (Rizzi, 2007; Poggi and Zuccaro, 2008). The expression of some social emotions constitutes social signals (e.g., interest, empathy, hostility, agreement, dominance, superiority, etc.), as these emotions indicate a specific relationship with others (Vinciarelli et al., 2011). Social exclusion makes individuals generate negative social emotions, which leads them to convey negative social signals to others. We suggest that an incomplete spokes-character face has characteristics similar to those of a person playing a hide-and-seek game with us. When a spokes-character's face is incomplete, the individual will regard the spokes-character as a person who seems to be playing with him or her. That is, spokes-characters with incomplete faces visually appear to be engaging in an activity the way a real human would.
Prior advertising research has shown that visual metaphors of moderate complexity have more positive effects on evaluations than simpler or much more complex visual metaphors (Mulken et al., 2014). Similarly, compared with a complete spokes-character face (no visual metaphor) in an icon, we expect an incomplete spokes-character face (a moderately complex visual metaphor) to have a much more positive impact on brand evaluations. Therefore, we expect that incomplete spokes-character faces in icons lead users to develop more positive brand evaluations than complete spokes-character faces. Accordingly, we propose the following hypothesis: H1: Adopting an incomplete (vs. complete) spokes-character face in a mobile application icon will lead to more positive brand evaluations.
Anthropomorphism and Interpersonal Closeness
Anthropomorphism refers to the attribution of human characteristics to non-human entities (Guthrie, 1993; Epley et al., 2007, 2008). Prior work has shown that anthropomorphism using visual or linguistic portrayals can evoke a human schema (Guthrie, 1993; Epley et al., 2007, 2008), which promotes consumers' perceptions of a product as having human-like characteristics (Aggarwal and Mcgill, 2007). Previous studies on incomplete aesthetic works, brand names and brand logos differ from studies on incomplete spokes-character faces. The former focused on individuals' cognition when they see incomplete images or typefaces (Peracchio and Meyers-Levy, 1994; Zhao and Meyer, 2007; Nazuk and Sajeev, 2018); these scholars proposed that incompleteness prompts individuals to mentally complete the entity, which produces a positive attitude. However, we argue that the findings for incomplete spokes-character faces reflect the positive effects of anthropomorphized spokes-characters. The literature suggests that the ability to anthropomorphize may depend on the presence of specific features (Guthrie, 1993). One motivation for people to anthropomorphize objects is to make better sense of the environment around them (Guthrie, 1993). Individuals use what they have already mastered or become familiar with to help them understand things they know less about by attributing human-like characteristics to objects (Aggarwal and Mcgill, 2007). When a company uses an incomplete spokes-character face in an icon, users may view the spokes-character as playing with them the way a person would, as an explanation for why the spokes-character has an incomplete face.
Furthermore, not all objects can give rise to anthropomorphic perceptions. If an object can move, it may make an individual perceive that it is alive (Tremoulet and Feldman, 2000). However, an object that moves quite slowly (e.g., a clock) may seem to lack human characteristics in this respect (Morewedge et al., 2004). Moreover, some scholars have proposed that a static logo can lead to the perception of movement, which may enhance consumer engagement and attitudes (Cian et al., 2014). In our study, a spokes-character with an incomplete face (a static image) in an icon design can also make users feel that the spokes-character appears to be moving, because the design creates the illusion that it is "playing" with users, which may promote a perception of movement and prompt individuals to anthropomorphize the spokes-character. In addition, the literature suggests that pictures in advertising that create a metaphor of a product engaged in some type of human behavior may result in perceptions of anthropomorphism and generate greater brand preferences (Delbaere et al., 2011). Therefore, in our study, we assume that using an incomplete spokes-character face in an icon may generate user perceptions of the movement and anthropomorphism of the spokes-character.
In our research, based on visual metaphor theory, we believe that using an incomplete (vs. complete) spokes-character face in a launch icon will result in higher degrees of anthropomorphism of the spokes-character as well as stronger user perceptions of interpersonal closeness, which leads to the development of positive brand evaluations. Interpersonal closeness is a dimension of interpersonal communication (Burgoon and Hale, 1987) that refers to the perception of interpersonal distance during interpersonal communication. In our study, interpersonal closeness refers to the perceived closeness between users and the spokes-character. If a person perceives closeness with another person, he or she may be more likely to develop trust, communication and intimacy with the other person (Woosnam, 2010). However, the level of closeness users perceive with spokes-characters in icons is quite difficult for companies to cultivate.
In addition, a spokes-character is not only a brand identification symbol but also an important way for brands to establish relations with consumers (Callcott and Phillips, 1996). The incomplete spokes-character, as a virtual character, conveys social emotions to users. The expression of some social emotions constitutes social signals (e.g., interest, empathy, hostility, interactivity, etc.), as these emotions indicate a specific relationship with others (Vinciarelli et al., 2011).
Therefore, when a company uses an incomplete spokes-character face in an icon, users of the mobile application will perceive the spokes-character to be dynamic; it will appear that the spokes-character is playing with them. That is, a spokes-character with an incomplete face leads users to develop closer interpersonal perceptions of it by enhancing their perceptions of anthropomorphism, which in turn leads them to develop more positive brand evaluations. Therefore, we offer a second hypothesis: H2: Adopting an incomplete (vs. complete) spokes-character face in designing a mobile application icon will enhance users' perceptions of anthropomorphism, which will lead to more favorable brand evaluations by enhancing interpersonal closeness.
Social Exclusion
Social exclusion is a common phenomenon (Mazzini et al., 2011). For instance, Leary et al. (2003) found that in 13 of 15 school shootings from 1995 to 2001, the perpetrators had been socially rejected, suggesting a close connection between rejection and attack. As social animals, human beings must rely on groups to obtain better opportunities for survival, reproduction and development (Su et al., 2017). If an individual is rejected by others, his or her survival can be seriously threatened. Social exclusion takes two forms: being ignored and being rejected (Molden et al., 2009; Lee and Shrum, 2012; Sinha and Fang-Chi, 2019). Social neglect occurs when an individual is ignored by others or excluded from a group in social communication, which produces negative emotions or evaluations (Williams, 2007). Social rejection occurs when an individual is explicitly excluded and negatively evaluated by others or groups, which leads to strong negative emotional experiences (Leary et al., 2003). Individuals who suffer social rejection clearly feel the rejection of others and believe they are not liked, which can even drive them to retaliatory behaviors, such as reducing pro-social behaviors (Twenge et al., 2007). However, when individuals are ignored by others, they will engage in pro-social behaviors. In addition, empirical studies have compared the effects of different types of exclusion on individuals. These studies have shown that rejected individuals have stronger prevention motivation: they may avoid social interaction and try to avoid behaviors that may lead to exclusion. Those who are ignored have stronger promotion motives: to form interpersonal connections with others, they actively participate in social interactions and consider what measures can be taken to avoid being excluded (Molden et al., 2009; Su et al., 2017). These findings suggest that different types of social exclusion elicit different responses. The exclusion of an individual is sometimes accompanied by explicit antipathy from other people but sometimes is not (Twenge et al., 2001; Lee and Shrum, 2012).
In our work, we propose that individuals who are ignored by others will develop positive brand attitudes toward a company that uses an incomplete spokes-character face in an icon. Existing empirical studies comparing the impact of different types of exclusion have found that rejected people have stronger prevention motivations, avoid social interactions, and pay attention to avoiding behaviors that may lead to exclusion, whereas ignored people have stronger promotion motivations, actively participate in social interactions, and consider what measures can be taken to avoid exclusion (Molden et al., 2009). Some scholars have reported that individuals who are treated with pro-social behavior have more positive emotions toward and evaluations of the group and others; in this way, their basic need for belonging is satisfied (Williams, 2007). Therefore, pro-social behavior not only benefits the recipient but also enables the individual who shows pro-social behavior to obtain more positive social evaluations (Clary et al., 1998) and more positive emotional experiences (Aknin et al., 2015). We suggest that a spokes-character with an incomplete face will make individuals who are being ignored feel that the spokes-character is showing pro-social behavior toward them, and that they will evaluate the brand more positively than a brand with a complete spokes-character face.
In addition, we expect that individuals who are being rejected will negatively assess a brand that uses a spokes-character with an incomplete face in its icon design. Studies have found that social exclusion triggers negative emotions and negatively influences individuals' self-evaluations (Baumeister et al., 2005); one such emotion is bitterness. Bitterness is a negative emotion between anger and sadness, usually caused by betrayal, arising from the disappointment of one's own or others' emotional expectations (Poggi and D'Errico, 2010b). This negative emotion affects individuals' social behaviors (Twenge et al., 2002). The expression of some social emotions constitutes social signals, as these emotions indicate a specific relationship with others (Vinciarelli et al., 2011). Previous research has shown that participants who anticipated regret chose the safer option (Zeelenberg et al., 1996). We therefore believe that even when a company's icon design uses a spokes-character with an incomplete face, people who are rejected will not feel a need to form relationships with other objects and will lack the motivation to personify objects, which makes them withdraw from social contact (Twenge et al., 2002, 2007). Hence, they cannot perceive the social signal from the spokes-character with an incomplete face in the icon. By contrast, individuals who are ignored by others show increased social sensitivity and renewed efforts toward social connection, so they can perceive the social signal from the spokes-character with an incomplete face in the icon. One reason people tend to anthropomorphize objects is that doing so may comfort them by providing relationships or companionship (Guthrie, 1993). Individuals who are rejected may develop prevention motivation, which leads them to avoid social contact so as to avoid the possibility of experiencing a lack of belonging (Molden et al., 2009). Because rejected individuals have no need to form relationships with other objects, they will lack motivation to personify objects, so they will not feel interpersonal closeness with the spokes-character. Accordingly, we hypothesize the following: H3a: When an individual is rejected, the use of an incomplete (vs. complete) spokes-character face in a mobile application icon design will reduce users' perceptions of interpersonal closeness, which will lead to more negative brand evaluations.
H3b: When an individual is ignored, the use of an incomplete (vs. complete) spokes-character face in a mobile application icon design will enhance users' perceptions of interpersonal closeness, which will lead to more positive brand evaluations.
Study 1
In experiment 1, we manipulated the completeness of the spokes-character face to test its effect on brand evaluations. We predicted that participants would have more positive brand evaluations when the icon design featured an incomplete spokes-character face than when it featured a complete spokes-character face. In study 1, we employed a one-factor (completeness: incomplete vs. complete) between-subjects design.
Procedure
First, we introduced a fictional clothing brand called "HO" to the participants. Then, we told them that "Little H" was the spokes-character of "HO" and presented participants with one of two versions of the mobile application icon image (see Appendix B), featuring either a complete or an incomplete face of "Little H." After that, the participants were asked to evaluate the facial completeness of "Little H" (1 = incomplete, 7 = complete). Subsequently, we measured how the participants evaluated the brand with four seven-point scales: "Please evaluate this brand on the following dimensions: dislike/like, bad/good, unappealing/appealing, and unfavorable/favorable" (α = 0.87) (Puzakova and Aggarwal, 2018). In addition, the subjects reported their emotions with four items: not at all happy/very happy; not at all excited/very excited; not at all hopeful/very hopeful; in a bad mood/in a good mood (α = 0.86) (Hagtvedt, 2011). All scales used a 7-point format (1 = strongly disagree and 7 = strongly agree). Finally, we collected demographic information and completed the experiment.
Pretest
Prior to the main experiment, 52 undergraduate students (32 females, M_age = 20.04, SD_age = 0.97) from Shenzhen University in China participated in a pretest. They were randomly assigned to one of two groups (completeness: incomplete vs. complete). We then showed them one of two icons (see Appendix B) and asked them to evaluate the completeness of the spokes-character's face in the icon. In addition, to test whether there was a difference in perceived cuteness between the two groups, the participants reported the likability (α = 0.92) of the spokes-character (Callcott and Alvey, 1991). Finally, the participants provided their gender and age.
To test whether the manipulation of the facial completeness of the spokes-character in the icon was successful, we conducted a one-way ANOVA with completeness as the dependent variable. The results showed a significant difference between the two groups [F(1,50) = 30.52, p < 0.001]: participants in the complete group indicated a higher degree of completeness (M_C = 6.22, SD = 0.75) than those in the incomplete group (M_IC = 4.60, SD = 1.20). In addition, the spokes-character with an incomplete face was not considered cuter (M_IC = 4.71, SD = 1.22) than the spokes-character with a complete face [M_C = 4.82, SD = 0.75; F(1,50) = 0.13, p = 0.72], indicating that the effect was not driven by the perceived cuteness of either design.
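For readers who want to see the manipulation-check procedure concretely, the short Python sketch below runs a one-way ANOVA of the kind reported above. The two rating vectors are simulated stand-ins whose means and SDs only roughly echo the reported pretest values; they are illustrative assumptions, not the study's data.

```python
# One-way ANOVA manipulation check: do the two icon conditions differ in
# perceived facial completeness? (Simulated, illustrative data.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
complete = np.clip(rng.normal(6.2, 0.75, 26), 1, 7)    # completeness ratings, complete-face group
incomplete = np.clip(rng.normal(4.6, 1.20, 26), 1, 7)  # completeness ratings, incomplete-face group

f_stat, p_val = stats.f_oneway(complete, incomplete)   # F test of the group difference
df_within = len(complete) + len(incomplete) - 2
print(f"F(1,{df_within}) = {f_stat:.2f}, p = {p_val:.4f}")
```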
Method
One hundred participants were recruited from a Chinese online platform (Sojump, https://www.sojump.com/) that is similar to MTurk Prime. The platform provides a large subject pool with qualified participants. Four participants who failed to complete the survey were excluded from the final analyses. The remaining 96 participants provided complete datasets (71 females; M_age = 24.59, SD_age = 8.22).
Manipulation check
We conducted a one-way ANOVA of users' perceptions of the facial completeness of the spokes-characters. The results showed that the participants in the complete condition rated the face as more complete (M_C = 5.67, SD = 1.16) than those in the incomplete condition [M_IC = 4.18, SD = 1.16; F(1,95) = 40.53, p < 0.001], confirming that our manipulation was successful.
Brand evaluations
The results of the one-way ANOVA of brand evaluations showed that the main effect of the facial completeness of the spokes-character was significant. Participants in the incomplete spokes-character face condition reported more positive brand evaluations (M_IC = 5.04, SD = 1.02) than those in the complete spokes-character face condition [M_C = 4.21, SD = 1.00; F(1,95) = 16.01, p < 0.001]. Therefore, compared with a complete spokes-character face in an icon design, an incomplete spokes-character face leads to more positive brand evaluations, which supports hypothesis 1.
Control variables
Furthermore, an incomplete spokes-character face might affect perceptions of the spokes-character's personality, so we examined the emotions of the participants. The data indicated that the completeness of the spokes-character's face affected the participants' emotions [M_IC = 4.88, SD = 0.89; M_C = 4.19, SD = 1.26; F(1,95) = 9.68, p < 0.01]. However, when we controlled for emotion, the positive effect of the incomplete spokes-character face still held.
Study 1 provided evidence of our main prediction: using a spokes-character with an incomplete (vs. complete) face in a launch icon positively affects brand evaluations. In the next study, we tested the mechanism of the positive effect of an incomplete spokes-character face in an icon design on brand evaluations with a mediation analysis.
Study 2
In this study, we measured the perceived anthropomorphism of the spokes-character and interpersonal closeness to it to test the underlying mechanism of the positive effect. We investigated whether the use of an incomplete (vs. complete) spokes-character face in a launch icon enhanced perceptions of anthropomorphism and in turn increased participants' perceptions of interpersonal closeness to the spokes-character, ultimately leading to more favorable brand evaluations.
Design
We employed a single-factor (completeness: incomplete vs. complete) between-subjects design. One hundred thirty participants with various backgrounds were recruited from Sojump. Two participants who failed to complete the survey were excluded from the final analyses. The remaining 128 participants provided complete datasets (68 females; M_age = 29.29, SD_age = 7.25). We used the download interface of a mobile application for a fictitious snack brand (Bear Snack) as our stimulus to extend our findings (see Appendix C).
Procedures
We introduced a fictitious snack app and showed its interface to the participants. After that, the participants answered the same manipulation item as in study 1, evaluating the facial completeness of the spokes-character (α = 0.88). Next, we measured perceptions of anthropomorphism with three items ("It seems almost as if the spokes-character has a mind of its own"; "To what extent does the spokes-character remind you of some human-like qualities?"; "The spokes-character looks like a person") (1 = "not at all" and 7 = "very much") (α = 0.77) (Puzakova et al., 2013; Hur et al., 2015; Puzakova and Kwak, 2017) and interpersonal closeness with three items, including "I feel close to the spokes-character," "There is a similarity between me and the spokes-character," and the Inclusion of Other in the Self (IOS) scale adopted from prior research on interpersonal evaluations (α = 0.79) (Aron et al., 2000; Orehek et al., 2018). Finally, the participants provided brief demographic information.
Manipulation check
We conducted a manipulation check on facial completeness and demonstrated that users in the complete face condition perceived the facial completeness of the spokes-character to be higher than those in the incomplete face condition [M_C = 6.03, SD = 1.04; M_IC = 3.95, SD = 1.50; F(1,127) = 83.40, p < 0.001].
Serial mediation analyses
The data indicate (see Figure 1) that, compared with the complete group, the incomplete spokes-character face group showed more positive perceptions of anthropomorphism [M_IC = 4.62, SD = 1.21; M_C = 4.10, SD = 1.09; F(1,127) = 10.33, p < 0.001] and interpersonal closeness [M_IC = 4.54, SD = 1.32; M_C = 3.98, SD = 1.25; F(1,127) = 15.21, p < 0.001]. To further examine the underlying mechanism of the effect of an incomplete spokes-character face in an icon design, we tested regression models with the incomplete spokes-character face as the independent variable, perceptions of anthropomorphism and interpersonal closeness as serial mediators, and brand evaluations as the dependent variable (Hayes, 2013, Model 6) using a bootstrapping approach. Consistent with H2, we found that perceptions of anthropomorphism and interpersonal closeness serially mediated the effect of the incomplete spokes-character face on brand evaluations (95% CI [−0.0647, −0.0049], excluding zero).
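To make the serial mediation test concrete, the sketch below implements a percentile-bootstrap test in the spirit of Hayes's Model 6 (two mediators in sequence). The variable names and simulated data are illustrative assumptions rather than the authors' dataset, and the OLS-plus-bootstrap recipe is a simplified stand-in for the PROCESS macro.

```python
# Serial mediation bootstrap: x -> m1 -> m2 -> y, indirect effect a1 * d21 * b2.
import numpy as np

rng = np.random.default_rng(0)

def ols_coef(X, y):
    """OLS coefficients for y ~ X (X already includes an intercept column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def serial_indirect(x, m1, m2, y):
    """Serial indirect effect through m1 then m2."""
    ones = np.ones(len(x))
    a1 = ols_coef(np.column_stack([ones, x]), m1)[1]         # m1 ~ x
    d21 = ols_coef(np.column_stack([ones, x, m1]), m2)[2]    # m2 ~ x + m1
    b2 = ols_coef(np.column_stack([ones, x, m1, m2]), y)[3]  # y ~ x + m1 + m2
    return a1 * d21 * b2

# Illustrative data: x = icon condition (0 = complete, 1 = incomplete),
# m1 = anthropomorphism, m2 = interpersonal closeness, y = brand evaluation.
n = 128
x = rng.integers(0, 2, n).astype(float)
m1 = 4.0 + 0.5 * x + rng.normal(0, 1, n)
m2 = 1.0 + 0.7 * m1 + rng.normal(0, 1, n)
y = 1.0 + 0.6 * m2 + rng.normal(0, 1, n)

# Percentile bootstrap (5,000 resamples) for the indirect effect.
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)
    boot[i] = serial_indirect(x[idx], m1[idx], m2[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {serial_indirect(x, m1, m2, y):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap confidence interval excludes zero, the serial indirect path is judged significant, which is the criterion applied in the analysis above.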
These results indicate that the positive impact of an incomplete spokes-character face is driven first by enhanced perceptions of anthropomorphism and then by enhanced interpersonal closeness, supporting H2. That is, compared with a complete spokes-character face in a mobile icon, when users see a spokes-character with an incomplete face, their perceptions of anthropomorphism and of interpersonal closeness with the spokes-character are stronger, which induces more positive evaluations. In the next study, we explored the boundary conditions.
Study 3
Design
In the experiment, we employed a 2 (completeness: incomplete vs. complete) × 3 (social exclusion: rejected vs. ignored vs. control) between-subjects design. Four hundred and ten participants were recruited from a Chinese online panel. Eight participants who failed to complete the survey were excluded from the final analyses. The remaining 402 participants provided complete datasets (255 females; M_age = 26.19, SD_age = 7.07).
Procedures
First, we manipulated the type of social exclusion. In the control condition, the participants were asked to recall something that happened to them yesterday and write it down. In the rejected condition, the participants were asked to recall and write down an experience of being strongly excluded for a period of time, in which they were told they were not accepted because of a strong dislike or other reasons. In the ignored condition, we asked subjects to recall and record an experience of being strongly ignored, in which no one told them they were disliked, but others ignored them or obviously ignored their responses. After the manipulation, the subjects reported the extent to which they felt ignored and rejected (Molden et al., 2009). Second, we presented the app icons with different degrees of completeness; the manipulation and manipulation check for the facial completeness of the spokes-character in the icon design were similar to those in study 1. To present a different stimulus scene, the interface in this study was a download interface (see Appendix B). Subsequently, we presented the icon again and measured the participants' brand evaluations as in study 1 as well as their perceptions of anthropomorphism and interpersonal closeness (Woosnam, 2010; Orehek et al., 2018). Finally, we collected the subjects' demographic information.
Manipulation check
First, we verified that there were significant differences among the three groups on the two manipulation check items for the types of social exclusion. We analyzed the degree of rejection and neglect under the three conditions (rejected vs. ignored vs. control). The post hoc test (LSD test) showed that on the first item (the degree of being ignored), there was a significant difference between the ignored condition (M = 5.14, SD = 1.70) and the control condition [M = 3.28, SD = 1.71; F(1,273) = 81.46, p < 0.001], with participants in the ignored condition reporting a higher level of being ignored. Overall, when individuals are ignored, they form more positive brand evaluations when an incomplete spokes-character face is featured in an icon; however, when individuals are rejected, their evaluations of brands with incomplete spokes-character faces become more negative. Moreover, there were no significant differences between the control group and the ignored group for either the incomplete spokes-character face [M_C = 5.26 vs. ...]
Serial mediation analyses
The data showed that, in the control group, the spokes-character with the incomplete face produced stronger perceptions of anthropomorphism [M_IC = 4.87, SD = 1.19; M_C = 4.13, SD = 1.28; F(1,139) = 12.79, p < 0.001] and interpersonal closeness [M_IC = 4.19, SD = 1.27; M_C = 3.44, SD = 1.18; F(1,139) = 13.12, p < 0.001] (see Figure 3). The results for the ignored group mirrored those of the control group (see Figure 4), while the rejected group showed a different pattern (see Figure 5). We predicted that using an incomplete (vs. complete) spokes-character face in a mobile application icon would generate perceptions of anthropomorphism, which would lead to greater user perceptions of interpersonal closeness to the spokes-character and, ultimately, more positive brand evaluations (completeness → perceptions of anthropomorphism → interpersonal closeness → brand evaluations). To test this theoretical framework, we conducted a serial mediation analysis (Hayes, 2013, Model 6, n = 5,000 bootstrap samples). The serial mediation effect was significant: in the ignored group, the mediating effect was significant (95% CI [−0.2756, −0.0372]), and the same held for the control condition (95% CI [−0.2429, −0.0333]). In addition, in the rejected group, the mediating effect was significant but in the opposite direction (95% CI [0.0337, 0.1906]).
These results show that when an individual is ignored by others, a spokes-character with an incomplete (vs. complete) face in an icon induces more positive brand evaluations. Because ignored persons long for interpersonal relationships, they are more sensitive to the anthropomorphism of the spokes-character and to the social signals (interpersonal closeness) transmitted by the incomplete face, which leads to positive brand attitudes. In contrast, when an individual is rejected by others, a complete (vs. incomplete) spokes-character face in the app icon produces more positive brand evaluations. The reason is that rejected individuals consciously avoid social interactions, generating resistance to living entities and to the social signals transmitted by incomplete faces, which produces negative brand evaluations. Therefore, the results support hypotheses H3a and H3b.
GENERAL DISCUSSION
This study assessed the influence of incomplete spokes-character faces in mobile application icons on brand evaluations. Based on three experiments, we provide evidence for, reveal the mechanisms of, and outline the boundary conditions of the positive effect of incomplete spokes-character faces. In our first study, we show that using an incomplete spokes-character face in an icon enhances users' brand evaluations. Study 2 reveals the underlying mechanisms: an incomplete spokes-character face leads to more favorable brand evaluations because the incompleteness humanizes the spokes-character by creating perceptions of anthropomorphism and thereby enhancing users' perceptions of interpersonal closeness to the spokes-character. Moreover, we demonstrate that the effect is reversed for individuals who are being rejected (study 3).
There are some limitations to our research, and future scholars can conduct in-depth research on how to design icons that increase anthropomorphic perceptions of the spokes-character. First, we extend the humanization literature by introducing a spokes-character strategy in icon design rather than relying on previous research on anthropomorphism theory (Epley et al., 2007; Puzakova et al., 2013). That is, a spokes-character might be humanized by showing an unusual face, which may help users remember the spokes-character for a long time (Chiu et al., 2009). Our study shows that an incomplete spokes-character face can also provide humanization cues. Moreover, future research can explore movement of the spokes-character in an icon, which might enhance anthropomorphic perceptions (Calvert, 2008). Second, we contribute to the literature on incomplete objects, such as product names (Miller and Kahn, 2005), ad photos (Peracchio and Meyers-Levy, 1994), and logos (Henderson and Cote, 1998; Hagtvedt, 2011; Nazuk and Sajeev, 2018). Our research shows the effect of an incomplete spokes-character face in a mobile application icon on users' brand evaluations; future research can explore this effect in product packaging or advertisement design.
This research introduces a novel way for companies to humanize their brands: applying incomplete spokes-character faces in mobile application icons. The current study further shows that the use of an incomplete spokes-character face improves brand evaluations through perceptions of anthropomorphism, which enhance users' perceptions of interpersonal closeness to the spokes-character. However, the positive effect of an incomplete spokes-character face is less pronounced when users are rejected by others. In summary, this research serves as a foundation for examining humanization efforts in marketing communication that go beyond the use of anthropomorphism. We hope that further research will explore the moderating role of brand personality (Aaker, 1997). Incomplete (vs. complete) spokes-character faces may lead users to perceive spokes-characters as more exciting. According to perceptual fluency theory (Lee and Labroo, 2004; Nazuk and Sajeev, 2018), if a brand positions its personality as exciting and reflects this excitement with an incomplete spokes-character face in an icon, consumers can perceive this excitement in advertising and process it easily, which benefits brand communication. What is more, social exclusion induces powerful motivations and a variety of negative emotions (e.g., sadness and fear) (Williams, 2007; Molden et al., 2009); future studies should therefore take these negative emotions into account.
Finally, Čeněk and Šašinka (2015) argue that there are culturally based differences in visual perception and in related cognitive processes such as attention and memory. According to their research, East Asians and Westerners perceive and think about objects differently: Westerners are prone to attend to focal objects (their size, movement and color) and analyze their attributes, whereas East Asians prefer to attend to a wide perceptual field and notice relationships among and changes in objects. Hence, the finding that incomplete spokes-character faces in icons produce higher anthropomorphic perception and stronger interpersonal closeness may apply only to Eastern participants, and this effect may not exist for Western participants.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
ZNi, LC, HY, and TZ contributed to the conception and design of the study. ZNi organized the database. LC performed the statistical analysis.
"Business",
"Computer Science"
] |
Compressive sensing and adaptive sampling applied to millimeter wave inverse synthetic aperture imaging
In order to improve speed and efficiency over traditional scanning methods, a Bayesian compressive sensing algorithm using adaptive spatial sampling is developed for single detector millimeter wave synthetic aperture imaging. The application of this algorithm is compared to random sampling to demonstrate that the adaptive algorithm converges faster for simple targets and generates more reliable reconstructions for complex targets. © 2017 Optical Society of America
OCIS codes: (100.3010) Image reconstruction techniques; (280.6730) Synthetic aperture radar; (280.4750) Optical processing of radar images; (110.6795) Terahertz imaging.
References and links
1. E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory 52(2), 489–509 (2006).
2. D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
3. X. Yuan, T. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, "Compressive hyperspectral imaging with side information," IEEE J. Sel. Top. Signal Process. 9(6), 964–976 (2015).
4. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, "Interferometric synthetic aperture microscopy," Nat. Phys. 3(2), 129–134 (2007).
5. D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, "A new compressive imaging camera architecture using optical-domain compression," Proc. SPIE 6065, Computational Imaging IV, 606509 (2006).
6. W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, "A single-pixel terahertz imaging system based on compressed sensing," Appl. Phys. Lett. 93(12), 121105 (2008).
7. L. Xiao, K. Liu, D. Han, and J. Liu, "A compressed sensing approach for enhancing infrared imaging resolution," Opt. Laser Technol. 44(8), 2354–2360 (2012).
8. O. Furxhi, D. L. Marks, and D. J. Brady, "Echelle crossed grating millimeter wave beam scanner," Opt. Express 22(13), 16393–16407 (2014).
9. J. Greenberg, K. Krishnamurthy, and D. Brady, "Compressive single-pixel snapshot x-ray diffraction imaging," Opt. Lett. 39(1), 111–114 (2014).
10. D. J. Brady, D. L. Marks, K. P. MacCabe, and J. A. O'Sullivan, "Coded apertures for x-ray scatter imaging," Appl. Opt. 52(32), 7745–7754 (2013).
11. D. J. Brady, Optical Imaging and Spectroscopy (Wiley, 2009).
12. A. Mrozack, M. Heimbeck, D. L. Marks, J. Richard, H. O. Everitt, and D. J. Brady, "Compressive and adaptive millimeter-wave SAR," Opt. Express 22(11), 13515–13530 (2014).
13. L. P. Song, C. Yu, and Q. H. Liu, "Through-wall imaging (TWI) by radar: 2-D tomographic results and analyses," IEEE Trans. Geosci. Remote Sens. 43(12), 2793–2798 (2005).
14. D. M. Sheen, D. L. McMakin, and T. E. Hall, "Three-dimensional millimeter-wave imaging for concealed weapon detection," IEEE Trans. Microw. Theory Tech. 49(9), 1581–1592 (2001).
15. K. B. Cooper, R. J. Dengler, N. Llombart, T. Bryllert, G. Chattopadhyay, E. Schlecht, J. Gill, C. Lee, A. Skalare, I. Mehdi, and P. H. Siegel, "Penetrating 3-D imaging at 4- and 25-m range using a submillimeter-wave radar," IEEE Trans. Microw. Theory Tech. 56(12), 2771–2778 (2008).
16. M. S. Heimbeck, D. L. Marks, D. Brady, and H. O. Everitt, "Terahertz interferometric synthetic aperture tomography for confocal imaging systems," Opt. Lett. 37(8), 1316–1318 (2012).
17. E. Yiğit, "Compressed sensing for millimeter-wave ground based SAR/ISAR imaging," J. Infrared Millim. Terahertz Waves 35(11), 932–948 (2014).
18. W. L. Chan, J. Deibel, and D. M. Mittleman, "Imaging with terahertz radiation," Rep. Prog. Phys. 70(8), 1325–1379 (2007).
19. M. Martorella, J. Palmer, F. Berizzi, and B. Bates, "Advances in bistatic inverse synthetic aperture radar," in International Radar Conference "Surveillance for a Safer World" (RADAR), 1–6 (2009).
20. R. Baraniuk and P. Steeghs, "Compressive radar imaging," in IEEE National Radar Conference Proceedings, 128–133 (2007).
21. L. Yu and Y. Zhang, "Random step frequency CSAR imaging based on compressive sensing," Prog. Electromagn. Res. C 32, 81–94 (2012).
22. J. H. G. Ender, "On compressive sensing applied to radar," Signal Process. 90(5), 1402–1414 (2010).
23. S. Ji, Y. Xue, and L. Carin, "Bayesian compressive sensing," IEEE Trans. Signal Process. 56(6), 2346–2356 (2008).
24. M. Tipping, "Sparse Bayesian learning and the relevance vector machine," J. Mach. Learn. Res. 1, 211–244 (2001).
25. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13(4), 600–612 (2004).
26. I. Daubechies, "The wavelet transform, time-frequency localization and signal analysis," IEEE Trans. Inf. Theory 36(5), 961–1005 (1990).
27. G. Lauritsch and W. H. Haerer, "Theoretical framework for filtered back projection in tomosynthesis," Proc. SPIE 3338, 1127–1137 (1998).
Introduction
Compressive sensing (CS) algorithms [1,2] are increasingly being adapted for image acquisition because they may increase the dimensionality of images, for example by adding side information to resolve spatial images spectrally and temporally [3]. They also significantly improve single detector imaging methods by increasing the sampling speed and efficiency over raster scanning [20], as demonstrated in applications as diverse as optical coherence tomography (OCT) [4], single pixel imaging [5,6], infrared imaging [7], millimeter wave imaging [8], and x-ray imaging [9,10]. CS achieves this by overcoming the Nyquist sampling limit, requiring only a few samples to reconstruct the original signal if it is sparse on some basis, such as a wavelet basis [26]. Instead of measuring a full data set, a single pixel detector may only need to acquire a few well-chosen samples to achieve a reliable reconstruction. Consequently, sampling strategies [11] have become a topic of great interest, especially CS-based adaptive sampling algorithms [12]. These are sequential sampling methods that improve sampling efficiency by inferring which points to sample next using knowledge of prior samples.
Growing in popularity as the technology matures, millimeter wave (MMW) imaging techniques are increasingly attractive for non-destructive high-resolution imaging and radar applications such as through-wall imaging [13], concealed weapon detection [8,14,15], and foreign object detection [16,17]. High-resolution MMW imaging requires broad bandwidths [18] and large apertures [14]. Because sensitive, low-cost detector arrays are unavailable, MMW imaging relies heavily on synthesizing large apertures to acquire high-resolution images. Synthetic aperture radar (SAR) and inverse synthetic aperture radar (ISAR) images are acquired by mechanically moving a single aperture [16,19] at the cost of slow acquisition speeds. Specifically, SAR images are formed by moving an aperture along a straight path and illuminating a stationary object, while ISAR images are acquired either by moving the aperture along a circular path around the target or by rotating the object in front of a fixed aperture. Although synthesizing a large aperture by mechanically moving a single pixel detector reduces the cost of a physically large aperture, mechanical scanning requires a long acquisition time, and mechanical stability becomes an issue as large volumes of spatial-spectral data are acquired.
CS algorithms have already been applied to MMW SAR and ISAR imaging [17,20–22]. It was shown in [17,20] that the spatial and spectral data can be compressed by random sampling and reconstructed using various CS algorithms. The primary random sampling strategies are random spatial sampling, random spectral sampling, and random spatial-spectral sampling. Random spatial-spectral sampling provides the best compression rate because of the huge amount of spatial-spectral data acquired. However, if the objective is to minimize acquisition time by reducing the amount of mechanical scanning, random spatial sampling is far superior to random spectral and random spatial-spectral sampling, even though its data compression rate is lower.
When MMW imaging is used to image through obscurants, the complexity of the scene is unknown [12–15]. Consequently, random sampling strategies are problematic because the number of samples needed cannot be predetermined. Adaptive sampling provides a compelling alternative that optimizes the measurement scheme regardless of the scene complexity, thereby minimizing the amount of spatial scanning required, especially for ISAR. In a proof-of-concept demonstration of an adaptive sampling CS algorithm applied to MMW SAR imaging, Mrozack et al. accurately located point scatterers in as many steps as there were scatterers, but the algorithm was of limited utility because it required that the exact number of scatterers in the scene be known in advance [12]. Here we introduce an ISAR method that adaptively selects each measurement location based on the Bayesian compressive sensing (BCS) framework [23,24] and needs no prior information on the scene complexity. The need for BCS applied to MMW ISAR was born of the need for a faster way to obtain high quality reconstructions of complex targets using a single heterodyne MMW transceiver. Current methodologies require mechanically scanning the target sequentially through many angles. This mechanical scanning is slow and further wastes time by measuring many angles that provide little critical information for the reconstruction. In contrast, range is determined by rapid electronic frequency sweeps, so there is no need for BCS in the range dimension. Therefore, by developing a methodology to identify which few angles provide the most critical information needed for a reconstruction, we can avoid measuring angles that provide little additional information and speed up acquisition. We show that the adaptive algorithm converges faster than random sampling for simple targets and generates more reliable reconstructions for complex targets. In addition, the BCS framework allows the user to define stopping criteria without prior knowledge of the scene.
Method
Other 2D imaging modalities directly sample in the x-y domain to render a target scene with reflectivity ρ(x, y), but ISAR (and computed tomography (CT)) samples in the angle-range domain (θ, r). Figure 1(a) illustrates how this is done for traditional MMW ISAR imaging, in which either the target or the imaging system is rotated and the interrogating beam from a transceiver horn antenna is directed toward the center of the scene at a sequence of angles θ_i and frequencies ν. As shown in Fig. 1(b), the same transceiver antenna measures the reflected signal F(θ_i, ν) using a vector network analyzer to record the S11 parameter at each angle over a range of frequencies. The transceiver antenna is then moved to the next angle θ_{i+1} and the measurement process is repeated until F(θ, ν) has been sampled at all angles, where Δθ and Δr denote the sampling intervals in the angular and range domains, respectively, M = 2π/Δθ is the total number of angular samples, and N = R/Δr is the total number of range samples. From Eq. (5), a vectorized expression of the model becomes

g = Hf,    (6)

where g is the measurement vector of size MN×1, H is the MN×MN forward matrix, and f is the MN×1 vector that is the target for reconstruction. To see how this is done, we next discuss the adaptive sampling strategy under the compressive sensing framework. Instead of sampling at all M angles, f can be sampled incompletely with only K samples (K < M) and reconstructed using a compressive sensing algorithm. We assume f is sparse, or sparse under a wavelet representation, for compressive reconstruction. Thus we chose to represent f by Haar wavelets, resulting in the wavelet coefficients ω. We denote the Haar wavelet basis as an MN×MN matrix B, so that f = Bω. The compressive measurement can thus be expressed as

g = HBω = Φω,

where Φ = HB is the projection matrix relating the wavelet coefficients ω to the compressive measurements, and a single measurement is simply g_k = r_k ω, where r_k is the k-th row of Φ. Note that here we use the notation g_k instead of g_m to indicate that the former corresponds to a compressive measurement.
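As a minimal numerical sketch of this vectorized model, the following Python snippet builds an orthonormal Haar basis B, forms the projection matrix Φ = HB, and takes K < M compressive measurements g_k = r_k ω. The random forward matrix H is an illustrative stand-in for the ISAR forward model, which in the paper is determined by the scan geometry.

```python
# Compressive measurement model g = H B omega = Phi omega with a Haar basis.
import numpy as np

def haar_analysis(n):
    """Orthonormal Haar analysis matrix W (rows = basis functions), n a power of 2."""
    if n == 1:
        return np.array([[1.0]])
    w = haar_analysis(n // 2)
    top = np.kron(w, [1.0, 1.0])                    # scaling (average) rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])   # wavelet (difference) rows
    return np.vstack([top, bottom]) / np.sqrt(2.0)

MN = 16                                  # toy scene size (M angles x N range bins, flattened)
W = haar_analysis(MN)                    # omega = W f, so the synthesis basis is B = W.T
B = W.T
f = np.zeros(MN); f[3] = 1.0; f[9] = -0.5    # toy sparse scene

rng = np.random.default_rng(2)
H = rng.normal(size=(MN, MN))            # stand-in forward matrix (illustrative assumption)
Phi = H @ B                              # projection matrix relating omega to measurements

K = 6                                    # sample only K < M projection rows
rows = rng.choice(MN, size=K, replace=False)
g = Phi[rows] @ (W @ f)                  # compressive measurements g_k = r_k omega
print(g.shape)                           # (6,)
```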
{ }
where λ is the regularization coefficient that controls the sparseness of the estimation result.
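To make the measurement model concrete, the following minimal Python sketch assembles a toy version of $\mathbf{g} = \mathbf{H}\mathbf{f} = \boldsymbol{\Phi}\boldsymbol{\omega}$ with an orthonormal Haar basis. The sizes, the sparse toy scene, and the helper names (haar_matrix, angle_rows) are illustrative assumptions of ours, not the authors' code.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar-type wavelet matrix of size n x n (n a power of 2),
    built recursively so that f = B @ w maps coefficients to the signal."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # scaling (low-pass) part
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # wavelet (high-pass) part
    return (np.vstack([top, bot]) / np.sqrt(2.0)).T  # columns = basis vectors

M, N = 8, 4                        # toy angular / range sample counts
f = np.zeros(M * N)                # sparse toy scene (vectorized)
f[5], f[17] = 1.0, 0.5
B = haar_matrix(M * N)             # f = B @ w
w = B.T @ f                        # wavelet coefficients (B is orthonormal)

def angle_rows(m, M, N):
    """Rows of H selecting the N range bins measured at angle index m."""
    h = np.zeros((N, M * N))
    h[np.arange(N), m * N + np.arange(N)] = 1.0
    return h

H = np.vstack([angle_rows(m, M, N) for m in (0, 3, 6)])  # K = 3 sampled angles
g = H @ f                          # compressive measurement g = H B w = Phi w
Phi = H @ B
assert np.allclose(g, Phi @ w)
```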
However, L1 regularization only provides a point estimate, which gives no handle for adaptively choosing the optimal next measurement. Others [23,24] have formulated the CS inversion from a Bayesian learning point of view, providing a posterior density function on the signal under reconstruction. Besides the improved accuracy of the point estimate, the full posterior density function yields confidence intervals on the estimate and thus provides an objective for optimizing the next measurement. Therefore the BCS framework is not only able to reconstruct the original data compressively using significantly fewer measurements, it also provides a way to find the next projection $\mathbf{r}_{k+1}$ after each measurement. This property of the BCS framework makes it a popular choice for achieving adaptive sampling in many imaging modalities. This paper focuses on how to use the full posterior density function to form the next projection $\mathbf{r}_{k+1}$ in the projection matrix $\boldsymbol{\Phi}$, and then how to realize the corresponding sampling scheme $\mathbf{h}_{k+1}$ in ISAR. The theoretical framework for the BCS algorithm is outlined next, following [23,24], to explain how it reconstructs the original signal and formulates the optimization method for choosing the next measurement angle.
The compressive measurement can be expressed as the projection of the significant wavelet coefficients $\boldsymbol{\omega}_s$, the remaining insignificant wavelet coefficients $\boldsymbol{\omega}_e$, and the measurement noise $\mathbf{n}_m$. Thus the measurement $\mathbf{g}$ can be expressed as

$$\mathbf{g} = \boldsymbol{\Phi}\boldsymbol{\omega}_s + \boldsymbol{\Phi}\boldsymbol{\omega}_e + \mathbf{n}_m = \boldsymbol{\Phi}\boldsymbol{\omega}_s + \mathbf{n}, \qquad (8)$$

where $\mathbf{n}$ is the generalized measurement noise.
The elements of the generalized measurement noise $\mathbf{n}$ are approximated as zero-mean Gaussian with variance $\sigma^2$. Thus, the likelihood of the measurement is

$$p(\mathbf{g} \mid \boldsymbol{\omega}_s, \sigma^2) = (2\pi\sigma^2)^{-K/2} \exp\!\left(-\frac{\|\mathbf{g} - \boldsymbol{\Phi}\boldsymbol{\omega}_s\|_2^2}{2\sigma^2}\right). \qquad (9)$$

For a compressive sensing problem where the signal is assumed to be sparse in the wavelet basis, a sparseness prior must be placed in the Bayesian formulation. A widely used sparseness prior is the Laplace density function [21,23,24]. The conventional CS inversion resulting in the solution of Eq. (7) can be seen as the maximum a posteriori (MAP) estimate of $\boldsymbol{\omega}$ [23,24]. However, using the Laplace density function as a sparsity prior results in a Bayesian inference that cannot be calculated in closed form because the Laplace prior is not conjugate to the Gaussian likelihood in Eq. (9). Previous authors [23,24] have provided a solution using the relevance vector machine (RVM) framework, which imposes a hierarchical prior with similar properties that is conjugate to the Gaussian likelihood in Eq. (9). A zero-mean Gaussian prior is first defined on $\boldsymbol{\omega}$,

$$p(\boldsymbol{\omega} \mid \boldsymbol{\alpha}) = \prod_{i} \mathcal{N}(\omega_i \mid 0, \alpha_i^{-1}), \qquad (10)$$
where $\alpha_i$ is the inverse variance indicating the precision. The second level is a Gamma prior on the hyper-parameter $\boldsymbol{\alpha}$,

$$p(\boldsymbol{\alpha} \mid a, b) = \prod_{i} \mathrm{Gamma}(\alpha_i \mid a, b). \qquad (11)$$

The final prior on $\boldsymbol{\omega}$ is thus derived by marginalizing over $\boldsymbol{\alpha}$,

$$p(\boldsymbol{\omega} \mid a, b) = \prod_{i} \int \mathcal{N}(\omega_i \mid 0, \alpha_i^{-1})\, \mathrm{Gamma}(\alpha_i \mid a, b)\, d\alpha_i. \qquad (12)$$

Note that the integral in Eq. (12) results in the Student-t distribution [24]. This prior can thus promote sparseness by choosing values of $a$ and $b$ such that it reaches its maximum at $\omega_i = 0$.
For the noise in the measurement, i.e., $\mathbf{n}$ in Eq. (8), a similar hierarchical prior is placed with $\alpha_0$ as the inverse variance of the noise,

$$p(\alpha_0 \mid c, d) = \mathrm{Gamma}(\alpha_0 \mid c, d). \qquad (13)$$
The marginal likelihood for $\boldsymbol{\alpha}$ and $\alpha_0$ can be derived by marginalizing over the weights $\boldsymbol{\omega}$, following a type-II maximum likelihood (ML) procedure [24]. Its maximization yields the iterative update rules

$$\alpha_i^{\mathrm{new}} = \frac{\gamma_i}{\mu_i^2}, \qquad \alpha_0^{\mathrm{new}} = \frac{K - \sum_i \gamma_i}{\|\mathbf{g} - \boldsymbol{\Phi}\boldsymbol{\mu}\|_2^2}, \qquad \gamma_i \equiv 1 - \alpha_i \Sigma_{ii}, \qquad (16)$$

where $\mu_i$ is the $i$-th posterior mean from Eq. (14) [23,24]. $\boldsymbol{\alpha}$ and $\alpha_0$ are then used to recalculate $\boldsymbol{\Sigma}$ and $\boldsymbol{\mu}$ using Eq. (14), which constitutes an iterative process that updates these parameters until convergence. The fact that $\boldsymbol{\alpha}$ and $\alpha_0$ can be updated iteratively means that initializing the $a$, $b$, $c$, $d$ parameters of the hierarchical Gamma priors in Eq. (12) and Eq. (13) is not necessary in this case; setting them to zero is equivalent to enforcing a uniform prior on $\boldsymbol{\alpha}$ and $\alpha_0$ [23].
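The iterative updates above can be sketched in a few lines of numpy. This is a minimal dense-matrix implementation of the type-II ML recursion, without the fast basis-pruning machinery of the full RVM in [23,24]; function and variable names are ours.

```python
import numpy as np

def bcs_posterior(Phi, g, n_iter=100, tol=1e-4):
    """Type-II ML updates for the BCS/RVM posterior (Eqs. (14) and (16));
    a dense-matrix sketch, assuming a real-valued projection matrix Phi."""
    K, M = Phi.shape
    alpha = np.ones(M)        # per-coefficient precisions
    alpha0 = 1.0              # noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(alpha0 * Phi.T @ Phi + np.diag(alpha))
        mu = alpha0 * Sigma @ Phi.T @ g
        gamma = 1.0 - alpha * np.diag(Sigma)          # "well-determinedness"
        alpha_new = np.minimum(gamma / (mu ** 2 + 1e-12), 1e12)
        resid = g - Phi @ mu
        alpha0 = (K - gamma.sum()) / (resid @ resid + 1e-12)
        if np.max(np.abs(alpha_new - alpha)) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    return mu, Sigma, alpha0
```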
Since the signals being reconstructed are the wavelet coefficients of the original signals, with $\mathbf{f} = \mathbf{B}\boldsymbol{\omega}$, the expectation and covariance of the posterior density function can respectively be derived as

$$\boldsymbol{\mu} = \alpha_0 \boldsymbol{\Sigma}\boldsymbol{\Phi}^T \mathbf{g}, \qquad \boldsymbol{\Sigma} = \left(\alpha_0 \boldsymbol{\Phi}^T\boldsymbol{\Phi} + \mathrm{diag}(\boldsymbol{\alpha})\right)^{-1}, \qquad (14)$$

where the diagonal of $\boldsymbol{\Sigma}$ indicates the level of accuracy (or uncertainty) of the reconstruction of the elements in $\mathbf{f}$ [23].
The ability to measure uncertainty provides a criterion for selecting the next best measurement so as to minimize the total number of samples needed to reconstruct the signal with high fidelity. The differential entropy is one criterion that serves this purpose [23,24]; for the Gaussian posterior it is, up to a constant,

$$h(\boldsymbol{\omega}) = \tfrac{1}{2}\log|\boldsymbol{\Sigma}| + \mathrm{const}. \qquad (18)$$

With a new measurement, $\boldsymbol{\Phi}$ is modified by adding a new row $\mathbf{r}_{K+1}^T$; the new entropy after the next measurement, derived in [23], is

$$h_{\mathrm{new}} = h - \tfrac{1}{2}\log\!\left(1 + \alpha_0\, \mathbf{r}_{K+1}^T \boldsymbol{\Sigma}\, \mathbf{r}_{K+1}\right). \qquad (19)$$

The goal of the next measurement is to minimize the new entropy, so the maximum of $\mathbf{r}_{K+1}^T \boldsymbol{\Sigma}\, \mathbf{r}_{K+1}$ should be pursued; ideally, $\mathbf{r}_{K+1}$ should be designed by performing an eigendecomposition of $\boldsymbol{\Sigma}$ and letting $\mathbf{r}_{K+1}$ be the eigenvector of the largest eigenvalue [23]. In addition, as pointed out in [14], maximizing $\mathbf{r}_{K+1}^T \boldsymbol{\Sigma}\, \mathbf{r}_{K+1}$ is equivalent to maximizing the variance of the next measurement. The user may specify a stopping criterion for the algorithm based on the desired change in this uncertainty.
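Under the ISAR constraint introduced next (one angle per measurement), the selection step reduces to scoring every still-unmeasured angle. A hedged sketch, reusing the Sigma returned by a posterior routine such as the bcs_posterior sketch above; the helper name and interface are ours:

```python
import numpy as np

def next_angle(Sigma, B, measured, M, N):
    """Pick the next ISAR angle by maximizing the summed predicted variance
    sum_r r^T Sigma r over the N projection rows of each unmeasured angle
    (the multiplexing-free criterion described in the text)."""
    best_angle, best_score = None, -np.inf
    for m in range(M):
        if m in measured:
            continue
        # Projection rows for angle m in the wavelet domain: Phi_m = H_m @ B,
        # where H_m selects the N range bins of angle m (rows of B here).
        Phi_m = B[m * N:(m + 1) * N, :]
        score = np.einsum('ij,jk,ik->', Phi_m, Sigma, Phi_m)
        if score > best_score:
            best_angle, best_score = m, score
    return best_angle
```

In use, the algorithm would alternate: update the posterior with bcs_posterior on the angles measured so far, call next_angle, measure that angle, and repeat until the entropy change falls below the user's threshold.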
An example illustrating how the projection vector $\mathbf{h}_{k+1}$ is realized under a hypothetical traditional BCS framework is presented in Fig. 2(a). The colored points in Fig. 2(a) indicate which points are being sampled in each measurement: orange points depict the transfer function of the k-th measurement. The formation of the forward matrix $\mathbf{H}$ is visualized in Fig. 2(c), where each row corresponds to a measurement and multiple points in each row indicate multiplexing. This type of sampling is usually realized with a coded aperture [11] that modulates which pixels to sample. However, this sampling strategy is not practical for ISAR imaging because the transceiver can only measure from one angle at a time. The sampling described in Eq. (21) requires measuring multiple angles and ranges at the same time, requiring the kind of extensive mechanical movement that we are trying to avoid. As noted above, ISAR imposes a constraint when maximizing $\mathbf{r}_{K+1}^T \boldsymbol{\Sigma}\, \mathbf{r}_{K+1}$: all range data are taken at a fixed angle during each measurement, as illustrated in Fig. 2(b). Given that our goal is to minimize the amount of mechanical movement, the sampling choice is thus limited to selecting the next angle $\theta_i$. Due to this constraint, the actual next projection for ISAR imaging cannot be the exact eigenvector that maximizes $\mathbf{r}_{K+1}^T \boldsymbol{\Sigma}\, \mathbf{r}_{K+1}$, and we lose the advantage of multiplexing the measurement; the transfer function is instead restricted to the rows corresponding to a single angle.
Figures 2(b) and 2(d) illustrate the sampling strategy by visualizing the transfer function for the ISAR measurement and its corresponding vectorized form, the transfer matrix. The difference between the typical BCS framework and the application of BCS to ISAR is that there is no multiplexing in the latter [11], so more measurements are required. Therefore, to select the next optimal measurement in ISAR, we have to define a library of all possible $\mathbf{h}_{K+1}$ matrices under the predefined sampling intervals and choose the one that maximizes the sum of $\mathbf{r}^T \boldsymbol{\Sigma}\, \mathbf{r}$ over its rows. For example, if the user defines 360 samples along the angular domain, such that the angular sampling interval is 1° and M = 360, the library consists of 360 possible matrices at the outset. Given the $\boldsymbol{\Sigma}$ from the current measurements, the next measurement is chosen from the remaining 359 angles. Figures 2(c) and 2(d) emphasize the difference between the theoretical BCS framework and the ISAR BCS framework, which is constrained from multiplexing. Under a hypothetical BCS framework with complete freedom to multiplex, the size of the $\mathbf{H}$ matrix with K measurements is K × MN, and since K can be very small, the compression rate may be maximized. However, for the ISAR application the BCS framework has limited ability to multiplex and can only adaptively choose among K angles, so the size of the $\mathbf{H}$ matrix becomes a much larger KN × MN.
Experiments and results
Two sets of experiments were performed to test the adaptive sensing algorithm. For both experiments, the sample objects are mounted on a motorized rotational stage. The objects are illuminated by a 15 cm diameter beam folded and collimated by a mirror. The beam is reflected by the sample object and measured by a single stationary detector. Figure 3(a) presents a schematic drawing of the experimental setup. The source radiation is generated by a transceiver module frequency-swept from 75 to 110 GHz. The reflected signal is analyzed with a network analyzer. The spatial measurement is performed by rotating the target on the rotational stage, then obtaining the spectral data and evaluating its Fourier transform. The sample under investigation is a 3D-printed nautilus-shaped dielectric cylinder, shown in Fig. 3(b). The cylinder is 20 cm tall with outer radius 30 mm and inner radius 14 mm. Figure 3(c) plots the spatial-spectral (i.e., angle-range) data measured at 3000 locations equally spaced from 0° to 300°. The 180° orientation is depicted in Fig. 3(a), and the strong specular reflection from the flat surface is easily seen in Fig. 3(c). The range-varying structure between 90° and 270° is produced by reflections from the inner cylinder cavity as it rotates closer to the source. The ISAR image can be reconstructed by performing filtered back-projection using the inverse Radon transform on the measured spatial data. The results, shown in Fig. 3(d), indicate a faithful reconstruction of the outer half-cylinder surface but a less accurate reconstruction of the "eclipsed" flat surface because of multiple reflections. The full-sampling measurements and reconstruction shown in Figs. 3(c) and 3(d) constitute the reference against which the adaptively sampled and reconstructed images will be compared.
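Filtered back-projection from angle-range data is available off the shelf; for instance, scikit-image's inverse Radon transform applies a ramp filter by default. The sketch below uses a random placeholder sinogram in place of the measured $f(\theta, r)$, so only the call pattern is meaningful.

```python
import numpy as np
from skimage.transform import iradon  # filtered back-projection

# Range profiles stacked per angle, shape (num_ranges, num_angles);
# a hypothetical stand-in for the measured data of Fig. 3(c).
num_angles = 3000
angles_deg = np.linspace(0.0, 300.0, num_angles, endpoint=False)
f_theta_r = np.random.rand(256, num_angles)   # placeholder sinogram

# iradon expects one column per projection angle and ramp-filters by
# default, i.e., filtered back-projection.
rho_xy = iradon(f_theta_r, theta=angles_deg, circle=True)
```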
The adaptive sensing algorithm described here is based on a Bayesian compressive sensing framework [23,24], which requires sparsity of the signal. However, the target under investigation may not satisfy the sparsity requirement. We perform a wavelet transform (Haar) on the measured signal in the range domain, which is relatively sparse compared to the signal in the spatial domain, as shown in Fig. 4(a). The wavelet transform provides a representation of the signal in a multi-scale basis [26]. For wavelets with lower-order scales, the original signal is represented by a slowly varying basis that produces a few coefficients with large values. For higher-order wavelets, the original signal is represented by a version of the original basis scaled with faster variation and smaller windows, generating a large number of coefficients with much smaller values. In Fig. 4(a), the wavelet coefficients of the target are shown in vectorized form. The lower-order coefficients correspond to the part of the signal with slower variation, whereas the higher-order coefficients correspond to faster variation and finer features. The spectrum of the wavelet coefficients also indicates the sparsity of the target. The reconstructed signal after 25 adaptive measurements is shown as an offset to the original signal in Fig. 4(a). From the comparison between the coefficients of the full measurement and those of the compressed measurement, it can be seen in Fig. 4(a) that the reconstruction with the first 25 measurements selected by the adaptive algorithm can restore the original signal. The minor differences in the higher-order coefficients can be tolerated because they are likely the result of measurement noise and contribute only a small percentage of the energy. The first 25 measurements in the range domain are reconstructed and back-projected to form the target image shown in Fig. 4(b). The white outline depicts the true contour of the target. To assess the reconstruction quality of the compressed adaptive measurement, we measured a structural similarity index (SSIM) [25] of 0.944 when comparing to the full-measurement reconstruction $\rho(x, y)$ in Fig. 3(d). It should be pointed out that the BCS algorithm allows the user to specify a stopping criterion once the change of uncertainty in the measurement no longer decreases appreciably, yet we let the simulated experiment run without stopping in order to analyze convergence. The goal here is to provide information that can be used to define appropriate stopping criteria.
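The SSIM comparison can be reproduced with scikit-image; the arrays below are placeholders standing in for the full and compressed reconstructions.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Hypothetical full and compressed reconstructions on the same grid.
full_recon = np.random.rand(256, 256)
cs_recon = full_recon + 0.01 * np.random.randn(256, 256)

score = ssim(full_recon, cs_recon,
             data_range=full_recon.max() - full_recon.min())
print(f"SSIM vs. fully sampled reconstruction: {score:.3f}")
```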
The idea of using CS techniques to reconstruct the target image from a small subset of all possible measurements has already been demonstrated [1,2]. It was shown that randomly selecting measurement angles in an ISAR experiment can substantially reduce the number of measurements and thus the amount of mechanical scanning. Using an adaptive algorithm for compressed spatial sampling further improves efficiency by reaching an acceptable reconstruction faster.
To demonstrate this advantage, we simulated the experiment 60 times, each starting at a different angle, and compared the results to those of 60 experiments using random measurements. The result is shown in Fig. 4(c), using the normalized mean squared error (MSE) as the benchmark and assuming that the target and its associated $\rho(x, y)$ are unknown to the observer. The adaptive algorithm reaches low MSE faster than random sampling. Although the advantage tends to saturate as the number of measurements approaches the allowed maximum, the narrow error bars imply that the adaptive algorithm is more stable when approaching convergence, because it exploits the asymmetry of the target to select the measurements that most increase the information content. A histogram of the number of measurements required to reach 15% MSE further demonstrates the advantage of the adaptive algorithm over random sampling, as shown in Fig. 4(d). Although random sampling may occasionally converge faster than the adaptive algorithm, the adaptive algorithm is overall much more reliable and converges more consistently. This is essential when the target has unknown complexity and the number of measurements needed cannot be predetermined.
It can also be noticed in Fig. 4(d) that there was one outlier for the adaptive sampling that used all 60 measurements to reach the 15% MSE mark. The outlier is a consequence of the sampling constraints described earlier. Since ISAR has no multiplexing ability, each measurement is constrained to a single angle, and the algorithm must maximize the sum of $\mathbf{r}^T \boldsymbol{\Sigma}\, \mathbf{r}$ over candidate angles; several candidates may produce nearly the same sum while their corresponding angles differ significantly. Moreover, since the constraint may not allow the algorithm to choose the largest eigenvalue of $\boldsymbol{\Sigma}$ and its associated eigenvector $\mathbf{r}_{K+1}$, the algorithm's more limited choices may be among lesser directions. Consequently, the algorithm may sometimes choose an angle that does not aid convergence.
Choosing the next measurement angle to maximize variance necessarily minimizes the new entropy, as shown in Eq. (19). In other words, the algorithm chooses to maximize "new" information with each measurement. To better understand the selection process, we show in Figs. 5(a)-5(d) four different stages of one simulated adaptive measurement, i.e., the 4th, 8th, 12th, and 16th adaptive measurements, respectively. The colored lines indicate the four measurement angles added in each sequence. Note that the early measurements are relatively evenly spaced in angle, but the later measurements have identified the thinner "eclipse" section for further scrutiny. These intuitive choices illustrate the algorithm's ability to recognize the part of the target with the most angle-dependent variation in signal. Are there conditions under which adaptive sampling does not have a significant advantage over random sampling? Consider a cylindrical target for which all measurement angles return the same signal. In this case, each measurement would introduce the same amount of uncertainty, and the adaptive algorithm would simply scan every angle. Thus, in scenes with cylindrical symmetry, the adaptive algorithm will not outperform the random sampling strategy. This can be observed in a second experiment involving a simpler target with cylindrical symmetry, composed of four metal posts that appear as point scatterers in the image. The configuration is shown in Figs. 6(a) and 6(b), the range-angle data $f(\theta, r)$ is shown in Fig. 6(c), and from these data $\rho(x, y)$ is reconstructed in Fig. 6(d). Although the geometry of this target is simpler than in the previous experiment, Fig. 7(a) reveals that the wavelet coefficients of the measurements in this second experiment are less sparse than those of the first experiment, in part because of the cylindrical symmetry. Figure 7(b) shows the reconstruction of the target with 35 measurements from the adaptive algorithm, which has an SSIM index of 0.917 compared to the benchmark in Fig. 6(d).
As predicted, the symmetry and non-sparsity impose difficulties for the adaptive algorithm. To compare the adaptive algorithm with the random sampling method, we again simulated 60 different experiments, each starting at a different measurement angle. As expected, the MSE progression of both methods, shown in Fig. 7(c), indicates that the adaptive algorithm did not outperform the random sampling algorithm; the two algorithms tend to approach a given MSE at the same rate. We believe this is due to the geometry of the target, for which most angles introduce a similar amount of uncertainty, so random selection is not an inferior methodology in this scenario. However, the adaptive algorithm still has the advantage of being more consistent, which is shown in Fig. 7(c) by the smaller uncertainties associated with adaptive sampling. In addition, the histogram in Fig. 7(d) indicates how many measurements each algorithm took to achieve an MSE of 15%. The adaptive algorithm tends to achieve this goal in 45-50 measurements, whereas random sampling was less reliable, taking 35-55 measurements.
To characterize the advantage BCS provides, it is not enough to quantify how many fewer measurements were required to achieve a certain MSE. We must also consider the time and complexity introduced by the algorithm itself. According to the proposed method for selecting the next measurement, $\mathbf{r}^T \boldsymbol{\Sigma}\, \mathbf{r}$ must be calculated for all remaining possible angles so that the best $\mathbf{r}_{K+1}$ may be selected, and the extra time for these matrix multiplications is not always negligible. The advantage also shrinks as sampling in the scene increases and illumination from new directions provides little additional information. However, even in these scenarios, convergence is reached more consistently and reliably by adaptive methods.
Fig. 1. a) Imaging configuration for traditional ISAR imaging. b) Example of a measurement at a single angle in the frequency domain, $F(\theta_i, \nu)$, and the range data in the spatial domain obtained by Fourier transform. c) Complete range data of all angles, $f(\theta, r)$; these data bear a resemblance to the sinograms acquired in CT imaging as scattering elements in the scene rotate closer to or farther from the transceiver, and since the range data represent the strength of the reflected signal in a given direction, $f(\theta, r)$ is related to $\rho(x, y)$. d) Reconstruction obtained by applying the FBP algorithm to the range data in c).

Fig. 2. a) Visualization of the next measurement under a hypothetical traditional BCS methodology. Colored points indicate the points sampled in step k (orange) and k + 1 (green). b) Visualization of the next measurement for ISAR imaging. c) Vectorized form of $\mathbf{h}_{K+1}$ for the hypothetical multiplexed case. d) Vectorized form of $\mathbf{h}_{K+1}$ for the ISAR case.

Fig. 3. (a) A schematic drawing of the experiment. (b) Drawing of the target half-cylinder nautilus. (c) The range signal measured at 3000 angles separated by 0.1° per measurement. (d) The ISAR reconstruction via the filtered back-projection method with all 3000 measurements; the fully sampled reconstruction is treated as the ground truth for the simulated experiments.

Fig. 4. (a) The reconstruction of the wavelet coefficients of the signal after 25 measurements. (b) The ISAR reconstruction via filtered back-projection on the data from 25 measurements. (c) The average MSE comparison of adaptive spatial sampling and random spatial sampling for 60 simulated experiments. Error bars indicate the standard deviation of the MSE. (d) Histogram of how many measurements the adaptive and random sampling took to reach 15% MSE.

Fig. 5. (a)-(d) Results from the adaptive algorithm after 4, 8, 12, and 16 measurements, respectively. Lines with different colors indicate the measurements made during each stage, i.e., white lines are the first 4 measurements and blue lines are the 5th-8th measurements. These figures demonstrate the process of adaptive selection.

Fig. 6. Image of the target metal posts a) from the side and b) from above. c) The range signal measured at 3000 angles separated by 0.1° per measurement. d) The ISAR reconstruction via filtered back-projection.
"Mathematics"
] |
Performance Analysis for Cooperative Jamming and Artificial Noise Aided Secure Transmission Scheme in Vehicular Communication Network
Vehicular communication has emerged as a supporting technique for improving road traffic safety and efficiency in the intelligent transportation system (ITS). However, wireless vehicular communication links may suffer from eavesdropping threats due to the broadcast nature of the wireless medium and the high mobility of vehicles. In practice, an artificial noise (AN) assisted beamforming scheme can be utilized to fight against multiple malicious eavesdroppers. Unfortunately, channel estimation errors caused by the high mobility of vehicles may lead to noise leakage at the legitimate receiver, resulting in a significant loss in secrecy performance. In this paper, a joint cooperative jamming (CJ) and AN aided secure transmission scheme is proposed for vehicular communication networks, taking imperfect channel state information (CSI) into account. In this scheme, cooperative jammers are utilized to further enhance physical layer security. We derive closed-form expressions for the connection and secrecy outage probabilities in the presence of AN leakage and signal offset using a stochastic geometry approach. Furthermore, the proposed scheme is capable of maximizing the secrecy throughput as a function of the relative vehicular velocity, balancing both the reliability and security of the legitimate link. We also comprehensively analyze the effect of key system parameters on secrecy performance through asymptotic analysis. Finally, the effectiveness of the proposed scheme is validated by numerical results.
Introduction
Vehicular communication is regarded as an emerging technology for improving road safety, transport efficiency and driving experience in the intelligent transportation system (ITS) and future autonomous transport systems [1,2]. Messages can be disseminated quickly by exploiting the paradigms of vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communications in vehicular networks [3]. However, due to the broadcast nature of the wireless medium, malicious vehicles may eavesdrop on or jam the vehicular communication links for their own profit, which can threaten driving safety and jeopardize ITS efficiency [4,5]. Therefore, information security is a key issue in the development and application of vehicular networks [3,4,5]. This motivates research on vehicular communication from the perspective of communication security [6]. Numerical results for the proposed scheme are provided via MATLAB in Section V. Finally, the conclusions are drawn in Section VI.
Notations: We use bold lowercase and uppercase letters to denote column vectors and matrices, respectively. I_n denotes the n × n identity matrix. Pr{·}, ‖·‖, |·| and (·)^T denote probability, the Euclidean norm, the absolute value and the transpose, respectively. exp(λ), Γ(N, λ) and CN(µ, σ²) denote the exponential distribution with parameter λ, the Gamma distribution with parameters N and λ, and the circularly symmetric complex Gaussian distribution with mean µ and variance σ², respectively. L(·) denotes the Laplace transform of a random variable. Finally, C^{m×n} denotes the m × n complex number domain.
System Model

Figure 1. Joint CJ and AN aided secure transmission model. Alice aims to transmit a confidential message to Bob in the presence of randomly located passive eavesdroppers (Eves) trying to capture the confidential information. In addition, cooperative jammers (Charlies) emit interference signals to confuse the Eves.
As shown in Fig. 1, we consider a joint CJ and AN aided secure transmission model in a vehicular communication network, where a vehicle (Alice) aims to transmit a confidential message to another legitimate vehicle (Bob) in the presence of randomly located passive eavesdropper vehicles (Eves) trying to capture the confidential information. In addition, there exist cooperative jammers (Charlies) that emit interference signals to confuse the Eves. Note that the Charlies act as pure cooperative jammers without information forwarding [25]. The sets of Eves and Charlies are defined as K = {1, 2, . . . , K} and C = {1, 2, . . . , C}. For convenience, we refer to the k-th Eve as E_k and the c-th Charlie as C_c. We assume that each Charlie and Alice are equipped with N_c and N_a antennas, respectively, while each Eve and Bob are equipped with a single antenna [26]. Without loss of generality, the spatial locations of the Eves and Charlies are characterized by two independent homogeneous Poisson point processes (PPPs), Φ_e and Φ_c, with intensities λ_e and λ_c over the two-dimensional plane, respectively.
All communication links undergo standard path loss characterized by the exponent α, and the channels experience quasi-static Rayleigh fading, where the fading coefficients are assumed to vary from one block to another while remaining constant during a transmission block for simplicity [18,28]. The fast-fading channels from Alice to Bob and to E_k are denoted by h_{a,b} ∈ C^{Na} and h_{e,k} ∈ C^{Na}, respectively, and those from a Charlie to Bob and to E_k are denoted by h_{c,b} ∈ C^{Nc} and h_{c,k} ∈ C^{Nc}. We assume Bob estimates the intended channel with estimation errors [16]. In this case, we use a first-order Gauss-Markov model to depict the fast-fading variation [29]; the exact intended channel h_{a,b} can be modeled as

$$\mathbf{h}_{a,b} = \hat{\mathbf{h}}_{a,b} + \mathbf{e}_{a,b}, \qquad (1)$$

where the estimated value ĥ_{a,b} ∼ CN(0, ρ² I_{Na}) is independent of the channel estimation error e_{a,b} ∼ CN(0, (1 − ρ²) I_{Na}). We consider ρ ∈ [0, 1] as the channel estimation accuracy [16]. Note that ρ = 0 indicates that no CSI is obtained at all, while ρ = 1 means perfect channel estimation. For the Jakes fading model, ρ is given by ρ = J₀(2πf_d T), where J₀(·) is the zero-order Bessel function of the first kind, T is the block duration, and f_d = νf_c/c is the maximum Doppler frequency with c = 3 × 10⁸ m/s, ν the relative vehicular velocity, and f_c the carrier frequency [16]. For simplicity, the CSI of the Charlies and Eves is assumed available [28]. Specifically, we assume that h_{e,k} ∼ CN(0, I_{Na}) and h_{c,k} ∼ CN(0, I_{Nc}).
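The dependence of ρ on velocity, carrier frequency and block duration follows directly from the Jakes formula quoted above; a small sketch, where the parameter values are our own examples:

```python
import numpy as np
from scipy.special import j0   # zero-order Bessel function of the first kind

def estimation_accuracy(v_mps, fc_hz, T_s, c=3e8):
    """Channel estimation accuracy rho = J0(2*pi*f_d*T) under the Jakes
    model, with Doppler f_d = v * fc / c (symbols as defined in the text)."""
    f_d = v_mps * fc_hz / c
    return j0(2.0 * np.pi * f_d * T_s)

# e.g., 30 m/s relative velocity, 5.9 GHz carrier, 0.1 ms block:
print(estimation_accuracy(30.0, 5.9e9, 1e-4))   # rho ~ 0.97, mild error
```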
Secure Transmission Scheme
To confuse the Eves while ensuring secure transmission, Alice adopts an AN-aided beamforming transmission strategy to emit the confidential information along with AN. Let [w_a, W_a] constitute an orthogonal basis, where w_a = ĥ*_{a,b}/‖ĥ_{a,b}‖ is the beamforming precoding vector, with ĥ_{a,b} the estimate of the channel h_{a,b}, and W_a ∈ C^{Na×(Na−1)} denotes an AN beamforming matrix onto the null space of ĥ_{a,b}, i.e., ĥ^H_{a,b} W_a = 0. The AN-aided transmitted signal vector s_a can be formulated as

$$\mathbf{s}_a = \sqrt{\theta P_a}\, \mathbf{w}_a x + \sqrt{\frac{(1-\theta) P_a}{N_a - 1}}\, \mathbf{W}_a \mathbf{z}_a, \qquad (2)$$

where θ ∈ [0, 1] is the ratio of the information-bearing signal power to Alice's total transmit power P_a. Note that θ = 1 indicates secrecy beamforming without AN, and θ = 0 denotes that the confidential information transmission is suppressed.
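A common way to realize w_a and the null-space matrix W_a is via an SVD of the channel estimate; the sketch below is one standard construction, not necessarily the paper's exact choice of basis, and the function name is ours.

```python
import numpy as np

def an_precoders(h_hat):
    """Beamformer w_a aligned with the channel estimate and an AN matrix
    W_a spanning the null space of h_hat^H (so h_hat^H @ W_a = 0)."""
    w_a = h_hat.conj() / np.linalg.norm(h_hat)        # MRT beamformer
    # Null space of the 1 x Na row h_hat^H: trailing right singular vectors.
    _, _, Vh = np.linalg.svd(h_hat.conj().reshape(1, -1))
    W_a = Vh[1:, :].conj().T                          # Na x (Na - 1)
    assert np.allclose(h_hat.conj() @ W_a, 0, atol=1e-10)
    return w_a, W_a

rng = np.random.default_rng(0)
h_hat = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
w_a, W_a = an_precoders(h_hat)
```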
Here x ∼ CN(0, 1) is the secret message for Bob, and z_a ∈ C^{Na−1} is an AN vector with distribution CN(0, I_{Na−1}). Concurrently, the zero-forcing technique is utilized at the Charlies. These external jamming signals generated by the Charlies further enhance the security performance [25]. The jamming signal s_c at each Charlie should be designed to jam the Eves while eliminating additional interference at Bob. Therefore, s_c can be designed as

$$\mathbf{s}_c = \sqrt{\frac{P_c}{N_c - 1}}\, \mathbf{T}_c \mathbf{z}_c, \qquad (3)$$

where P_c denotes the transmit power of each Charlie, T_c ∈ C^{Nc×(Nc−1)} constitutes an orthonormal basis for the null space of h_{c,b}, i.e., h^H_{c,b} T_c = 0, and z_c is a Gaussian jamming signal vector. Alice and the Charlies simultaneously transmit the confidential and jamming signals. The received signals at Bob and at E_k are given in (4) and (5), where d_{a,b}, d_{e,k} and d_{c,k} denote the propagation distances from Alice to Bob, from Alice to E_k, and from C_c to E_k, respectively, and n_b ∼ CN(0, σ²_b) and n_k ∼ CN(0, σ²_e) are independent terminal Gaussian noise variables. According to (4)-(5), the signal-to-interference-plus-noise ratios (SINRs) at Bob and at E_k can be derived, where P_a θ ‖e_{a,b} w_a‖² and P_a(1−θ)‖e_{a,b} W_a‖²/(N_a − 1) denote the signal offset caused by the channel estimation error and the AN leakage, respectively, which give rise to a serious reduction in security performance. The SINRs γ_b and γ_{e,k} change dynamically with the channel estimation accuracy ρ and the power allocation ratio θ. The capacities of the k-th eavesdropping link and the legitimate link follow as C_{e,k} = log₂(1 + γ_{e,k}) and C_b = log₂(1 + γ_b). In the non-colluding scenario, the maximal eavesdropping capacity is the maximum over all Eves, i.e., C_E = max_{k∈Φ_e} C_{e,k}.
Secrecy Performance Analysis
In this section, the secrecy throughput is introduced as a crucial performance metric for evaluating the reliability-security rate of the legitimate link (bps/Hz) [16,32,33]. Adopting Wyner's wiretap encoding scheme [14], we use R_b and R_s to denote the transmitted codeword rate and the secrecy rate, respectively. Furthermore, the redundant information rate R_e = R_b − R_s is used to provide secrecy against the Eves. The secrecy throughput T is then given by

$$T = R_s (1 - P_{cop})(1 - P_{sop}), \qquad (9)$$

where P_cop denotes the connection outage probability (COP) and P_sop denotes the secrecy outage probability (SOP).
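Assuming the product form of Eq. (9) above (secrecy rate discounted by both outage events), the optimum θ* discussed in the numerical section can be located by a simple sweep; the outage curves below are hypothetical stand-ins for the paper's closed-form expressions.

```python
import numpy as np

def secrecy_throughput(R_s, p_cop, p_sop):
    """Secrecy throughput under Wyner's encoding: the secrecy rate discounted
    by the probabilities that the legitimate link fails (COP) or secrecy is
    compromised (SOP)."""
    return R_s * (1.0 - p_cop) * (1.0 - p_sop)

thetas = np.linspace(0.05, 0.95, 19)
p_cop = np.exp(-3.0 * thetas)      # hypothetical decreasing COP(theta)
p_sop = thetas ** 2                # hypothetical increasing SOP(theta)
T = secrecy_throughput(2.0, p_cop, p_sop)
print("theta* ~", thetas[np.argmax(T)])
```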
Secrecy Outage Probability (SOP)
A secrecy outage occurs when the capacity of the equivalent wiretap link exceeds the redundant information rate, i.e., C_E > R_e. Therefore, the SOP is P_sop = Pr{C_E > R_e}, with β_{Re} = 2^{Re} − 1, and the expectation over eavesdropper locations is obtained by utilizing the probability generating functional (PGFL) of the PPP [33]. According to [36], the CDF of γ_{e,k} conditioned on the jammer locations Φ_c can be expressed in closed form; substituting (16) into (15) yields the conditional probability Pr{γ_{e,k} > β_{Re} d^α_{e,k} I_{e,k} | Φ_c}, and plugging (17) into (13) gives the closed-form SOP shown in (18). In particular, the thermal noise can be neglected in the interference-limited network [37], i.e., σ²_e = 0, and we can further obtain a simpler expression of the SOP, denoted P^{int}_{sop}, in (19). From (19), it is easily observed that the SOP is inversely related to the cooperative jammer density λ_c; therefore, the secrecy performance can be enhanced by increasing λ_c. In contrast, P^{int}_{sop} is an increasing function of the eavesdropper density λ_e. In addition, P^{int}_{sop} increases as the power allocation ratio θ increases, because a higher θ means less power is allocated to the AN for confusing the Eves.
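Closed-form SOP results of this kind can be sanity-checked by Monte Carlo over PPP-distributed eavesdroppers. The sketch below uses a stripped-down path-loss-plus-Rayleigh model for the best (non-colluding) Eve, not the paper's full AN/CJ expressions, so only the qualitative trends in λ_e and R_e are meaningful; all parameter values are our own examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def sop_monte_carlo(lam_e, R_e, alpha, snr_e, radius=50.0, trials=20000):
    """Empirical SOP for non-colluding PPP eavesdroppers: an outage occurs
    when the strongest Eve's SINR exceeds beta = 2^R_e - 1."""
    beta = 2.0 ** R_e - 1.0
    area = np.pi * radius ** 2
    outages = 0
    for _ in range(trials):
        k = rng.poisson(lam_e * area)        # number of Eves in the disc
        if k == 0:
            continue                         # no Eve, no secrecy outage
        r = radius * np.sqrt(rng.random(k))  # uniform radii in a disc
        fading = rng.exponential(1.0, k)     # |h|^2 under Rayleigh fading
        sinr = snr_e * fading * r ** (-alpha)
        if sinr.max() > beta:
            outages += 1
    return outages / trials

print(sop_monte_carlo(lam_e=0.001, R_e=3.0, alpha=4.0, snr_e=1e4))
```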
Secrecy Throughput
Using the definition given by (9), we can obtain a closed-form expression for the secrecy throughput in the interference-limited network, as shown in (20).
Numerical Simulation Results and Discussions
In this section, several numerical results are provided to verify the theoretical analysis. In particular, the effects of key system parameters, such as the numbers of transmit antennas N_a and N_c, the relative vehicular velocity v, the density ratio λ_c/λ_e, and the power allocation ratio θ, on the security performance are presented in the figures below. Unless otherwise stated, the following main simulation parameters are adopted [27]: P_c = P_v = 30 dBm, α = 4, and N_a = N_c = 4. Additionally, R_b = 5 bps/Hz, R_e = 3 bps/Hz, and ρ = 1. Figure 2 plots the COP of the legitimate link versus θ for different numbers of antennas N_a. We observe that as θ increases, P_cop always decreases for the different numbers of antennas N_a; the results match the analytical expression in (12) very well. Furthermore, for a fixed θ, adding transmit antennas decreases the COP, because increasing the transmit power of the information-bearing signal or adding antennas improves the connection performance. Figure 3 presents the COP of the legitimate link, P_cop, versus the relative vehicular velocity v for different transmitted codeword rates R_b. It is shown that increasing R_b weakens the connection performance of the legitimate link. Furthermore, the connection performance is weakened significantly as the relative vehicular velocity v increases, because channel estimation errors caused by the high mobility of vehicles in a dynamic vehicular network lead to noise leakage at the legitimate receiver and thus a significant loss in connection performance. In Fig. 4, it is observed that the SOP P_sop declines rapidly at first and then stabilizes as the number of antennas N increases, for all considered jammer powers. When N becomes large, increasing the number of antennas is conducive to improving the secrecy performance; however, when N becomes sufficiently large, a secrecy performance floor appears. Hence, the result confirms the accuracy of our asymptotic analysis at high N in (25). Furthermore, one can readily observe in Fig. 4 that increasing the power of the cooperative jammers also enhances the secrecy performance.

Figure 5. The SOP versus power allocation ratio θ for different density ratios (λ_c = 0.1λ_e, λ_c = 0.5λ_e, λ_c = λ_e, λ_c = 5λ_e).

Fig. 5 shows the relationship between the SOP and the power allocation ratio θ for different density ratios of cooperative jammers and Eves. We observe that the SOP P_sop increases as θ increases, because a higher θ means less transmission power allocated to the AN for confusing the Eves. Furthermore, the SOP P_sop decreases as the density ratio of cooperative jammers to Eves increases, because a higher density ratio provides more cooperative interference signals for guaranteeing security. Figure 6 shows the secrecy throughput versus the power allocation ratio θ for different density ratios of cooperative jammers and Eves. It is observed that the secrecy throughput rises at first and then decreases as θ increases, implying that there exists an optimum θ* that maximizes the secrecy throughput.
This is because the power allocation ratio embodies a reliability-security tradeoff. A smaller θ allows more transmission power to be allocated to the AN signal, which yields higher security performance while impairing reliability. Conversely, a larger θ allocates more power to the information-bearing signal, which yields higher reliability while impairing security. This reveals that selecting an appropriate θ can improve the secrecy throughput. Furthermore, the secrecy throughput declines as the density ratio decreases, as observed in Fig. 6, which can be attributed to the increasing SOP; this result is consistent with Figs. 2 and 5. The final figure plots the secrecy throughput versus the relative vehicular velocity v, where λ_c = 10λ_e and θ = 0.6. For a given jammer power P_c, the secrecy throughput decreases as the relative vehicular velocity v increases, which implies that the imperfect CSI caused by the high mobility of vehicles is not conducive to enhancing the secrecy throughput. Furthermore, as the power of the cooperative jammers increases, a prominent increase in the secrecy throughput can be observed. As expected, the joint CJ and AN aided secure transmission scheme always outperforms the scheme without CJ, confirming that cooperative jammers can be utilized to further enhance physical layer security.
CONCLUSION
In this paper, a joint CJ and AN aided secure transmission scheme with imperfect CSI has been investigated for vehicular communication networks. Closed-form expressions for the COP and the SOP have been provided, and the secrecy throughput performance has been quantified to maintain the reliability-security tradeoff of the legitimate link. There exists an optimal power allocation that yields the maximum secrecy throughput for each relative vehicular velocity. Furthermore, the performance of the proposed scheme has been demonstrated by numerical results. Most importantly, our results indicate that cooperative jammers can be utilized to further enhance physical layer security, which benefits information security in vehicular communication networks.
"Engineering",
"Computer Science"
] |
Screening the Presence of Non-Typhoidal Salmonella in Different Animal Systems and the Assessment of Antimicrobial Resistance
Simple Summary In this study, for the first time in Chile, we compared resistance profiles of Salmonella strains isolated from 4047 samples from domestic and wild animals. A total of 106 Salmonella strains (2.61%) were isolated, and their serogroups were characterized and tested for susceptibility to 16 different antimicrobials. This study reports 47 antimicrobial-resistant (AMR) Salmonella strains (44.3% of total strains). Of the 47, 28 corresponded to single-drug resistance (26.4%) and 19 to multidrug resistance (17.9%). The association between AMR and a subset of independent variables was evaluated using multivariate logistic models. Interestingly, S. Enteritidis was highly persistent in animal production systems; however, we report that serogroup D strains were 18 times less likely to be resistant to at least one antimicrobial agent than the most common serogroup (serogroup B). The antimicrobials presenting the greatest contributions to AMR were ampicillin, streptomycin and tetracycline. Abstract Salmonella is a major bacterial foodborne pathogen that causes the majority of worldwide food-related outbreaks and hospitalizations. Salmonellosis outbreaks can be caused by multidrug-resistant (MDR) strains, emphasizing the importance of maintaining public health and safer food production. Nevertheless, the drivers of MDR Salmonella serovars have remained poorly understood. In this study, we compare the resistance profiles of Salmonella strains isolated from 4047 samples from domestic and wild animals in Chile. A total of 106 Salmonella strains (2.61%) are isolated, and their serogroups are characterized and tested for susceptibility to 16 different antimicrobials. The association between antimicrobial resistance (AMR) and a subset of independent variables is evaluated using multivariate logistic models. Our results show that 47 antimicrobial-resistant strains were found (44.3% of the total strains). Of the 47, 28 correspond to single-drug resistance (SDR = 26.4%) and 19 are MDR (17.9%). S. Enteritidis is highly persistent in animal production systems; however, we report that serogroup D strains are 18 times less likely to be resistant to at least one antimicrobial agent than the most common serogroup (serogroup B). The antimicrobials presenting the greatest contributions to AMR are ampicillin, streptomycin and tetracycline. Additionally, equines and industrial swine are more likely to acquire Salmonella strains with AMR. This study reports antimicrobial-susceptible and resistant Salmonella in Chile by expanding the extant literature on the potential variables affecting antimicrobial-resistant Salmonella.
Introduction
The global burden of non-typhoidal salmonellosis in 2010 was estimated at 93.8 million cases and 155,000 deaths per annum, of which more than two million cases were accounted for in the region of the Americas [1]. The causative agent belongs to the genus Salmonella, which comprises two species: enterica and bongori [2]. S. enterica contains six subspecies: enterica, salamae, arizonae, diarizonae, houtenae and indica [2]. According to the White-Kauffmann-Le Minor scheme, Salmonella has been classified into more than 2600 serovars [3]. This scheme is based on the antigenic reactions of the somatic or O-antigen to determine the Salmonella serogroup, with further reactions of the flagellar antiserum for the H1 and H2 antigens [4]. These antigenic reactions are used in conjunction to establish the antigenic formula and, consequently, to determine Salmonella serovars [5,6]. Forty-six variants of the O-antigens are contained in the scheme [6]; however, only a few serogroups contain the most frequently reported Salmonella serovars in humans and animals [7]. Frequently reported serovars fall fundamentally within serogroup D (i.e., Enteritidis, Dublin and Javiana), serogroup B (i.e., Typhimurium and Heidelberg), serogroup C1 (i.e., Infantis and Montevideo) and serogroups C2-C3 (i.e., Kentucky and Newport) [6].
Antimicrobial resistance (AMR) in Salmonella is a leading worldwide concern. The World Health Organization (WHO) has classified fluoroquinolone-resistant Salmonella as a high-priority target for new drug development [8]. Several outbreaks attributed to AMR Salmonella have been reported globally [9-11]. Surveillance systems in developed countries have shown variability in the current trends of Salmonella AMR, with divergent tendencies depending on the antimicrobial agent provoking resistance and the serovar type [12]. For instance, data from the National Antimicrobial Resistance Monitoring System for Enteric Bacteria (NARMS) of the United States showed that specific serovars (i.e., 4,5,12:i:-, Typhimurium, Newport and Heidelberg) were resistant to at least three antimicrobial classes [13]. This is commonly known as the presence of multidrug-resistant (MDR) strains. Similarly, serovars isolated in Europe exhibited varying prevalence of AMR [12], with S. Enteritidis being susceptible to the different antimicrobial groups tested in up to 84.7% of examined isolates [12]. Likewise, S. Enteritidis tested by the US NARMS presented low AMR levels, accounting for only 0-7.7% of resistant strains [13].
In Chile, the presence of Salmonella strains has been reported in different sources including water used for irrigation [14], backyard flocks [15], wild animals [16][17][18][19] and chicken eggs [18]. One study depicted a prevalence of MDR Salmonella of 13% in 35 strains obtained from irrigation water [14]. Additionally, a recent article reported one strain of S. Infantis in a wild owl at a Chilean rehabilitation center, which was not only MDR Salmonella but also an extended-spectrum ß-lactamase producer [16]. These reports have raised the importance of studying environments to better understand the distribution of Salmonella and the mechanisms by which it acquires resistance to different antimicrobials. This study aims to compare the prevalence of Salmonella AMR by exploring how it differs between serogroup types, sampling sources and different animal populations (e.g., domestic and wild animals) in Chile [16].
Study Sites and Sample Collection
We selected different study sites to represent a wide diversity of animals and environments where wild animals and livestock are found in Chile. Samples were classified into eight categories to represent a broader spectrum of characteristics, such as family and animal types, and geographic locations within the country (Table 1). Four of the categories were classified as closed systems, where animals have restricted movement, limited contact with other animal species, and are fed by humans. The closed-system category is composed of food production animals from eight industrial dairy farms (160 samples) [20] and 10 swine farms (182 samples), domestic animals from an equine veterinary hospital (545 samples) [21] and wild animals from three wildlife rehabilitation centers (405 samples) [19]. Samples were also obtained from animals living free-range outdoors, which were potentially fed with unsafe food remains, such as human waste [22]. The open-system category comprised samples obtained from 13 free-range dairy farms (260 samples), 329 backyard chicken farms (2188 samples), four sites of wetland birds (271 samples) and five other backyard animal sites (36 samples) (Table 1).
Salmonella Isolation
A total of 4047 fecal samples were collected from the sites described above between 2013 and 2017 for the isolation of Salmonella spp. Two sample types were collected: (i) sterile containers with fresh cow, swine, horse and wild animal feces that had been deposited in the environment, and (ii) samples obtained directly from animals (horses, wildlife and birds) (Table 1). For the latter, rectal or cloacal samples were extracted using Cary-Blair transport media (Copan Italia Spa, Brescia, Italy). All samples were collected under sterile conditions and transported to the laboratory at 4 °C for further processing.
The microbiological isolation method used in this study has been previously described [21]. Samples were cultured in buffered peptone water (Beckton-Dickinson, Franklin Lakes, NJ, USA) at 37 °C for 24 h. One hundred microliters were transferred into Rappaport-Vassiliadis (RV) media (Beckton-Dickinson) supplemented with novobiocin (20 mg/mL), and 1 mL was transferred into tetrathionate (TT) broth (Beckton-Dickinson) supplemented with iodine. These samples were then incubated at 42 °C for 24 h. Finally, a 100 µL aliquot of each selective enrichment broth was streaked onto an XLT-4 agar plate (Beckton-Dickinson) and incubated at 37 °C for an additional 24 h. Four presumptive Salmonella colonies were selected from each plate and transferred onto non-selective tryptic soy agar (TSA) (Beckton-Dickinson). Subsequently, they were confirmed as S. enterica strains via polymerase chain reaction (PCR) of the invA gene using previously described primers [23]. Confirmed isolates were stored at −80 °C in 20% glycerol.
Serogroup Characterization and Antimicrobial Susceptibility
A previously described molecular method for predicting the Salmonella serogroup was used in this study to determine the serogroups of the Salmonella strains [7]. DNA extraction was conducted using a DNeasy Blood and Tissue kit (QIAGEN; Hilden, Germany). DNA was quantified, and its quality was assessed using the 260/280 ratio in a MaestroNano spectrophotometer (Maestrogen, Taiwan). DNA was then adjusted to a concentration of 25 ng/µL. Multiplex PCR was conducted to identify the serogroup of each isolate [7]. The scheme detected the most common serogroups: B, C1, C2-C3 and D. Isolates that could not be classified were reported as not determined (ND) (Table S1).
Map of the Isolation Sites
Salmonella strains with AMR and those that were MDR were georeferenced using a geographical positioning system (Gpsmap62s, Garmin, Olathe, Kansas). Subsequently, their spatial distribution was mapped using color codes and risk maps on ArcGIS 10 software (Esri, Redlands, CA, USA) based on the coordinates from each isolation site.
Statistical Analyses
We employed three analyses throughout the study to explore the presence of AMR and MDR Salmonella strains. (i) Firstly, we examined Salmonella's presence and serogroups by employing descriptive statistics and exploratory analyses based on screening features of our sample. (ii) Secondly, we computed a multivariate hierarchical analysis to examine different Salmonella clusters with AMR, drawing on antimicrobial resistance and susceptibility profiles. (iii) Thirdly, two different regression models were employed to look at the association between Salmonella with AMR and our independent variables: serogroup type (B, C1, C2-C3, D, E1, N/D) and animal category (industrial swine, wetland birds, equine veterinary animals, free-range dairy animals and backyard chickens).
(ii) Multivariate hierarchical analysis. These analyses used agglomerative hierarchical clustering algorithms to evaluate how the different resistance profiles grouped together, with the Euclidean distance as the distance metric. Data were grouped into rows according to their standardized values with respect to serogroup classification (B, C1, C2-C3, E1, ND), animal and system types (bird, chicken, cow, horse, small mammal, swine, reptile; closed and free-range) and antibiotic susceptibility (amikacin, amoxicillin, ampicillin, cefoxitin, ceftriaxone, ciprofloxacin, chloramphenicol, streptomycin, gentamycin, kanamycin, trimethoprim/sulfamethoxazole and tetracycline). The results of the model are displayed in a dendrogram for interpretation and visualization. This analysis was carried out using InfoStat software, version 2017 (https://www.infostat.com.ar/, accessed on 10 March 2021).
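An equivalent clustering can be run with SciPy; the 0/1 resistance profiles below are randomly generated placeholders for the real strain-by-antimicrobial table.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical 0/1 resistance profiles (rows: strains, cols: antimicrobials).
rng = np.random.default_rng(42)
profiles = rng.integers(0, 2, size=(47, 12)).astype(float)

# Euclidean distances between profiles, then agglomerative clustering;
# scipy.cluster.hierarchy.dendrogram(Z) would plot the tree.
dists = pdist(profiles, metric="euclidean")
Z = linkage(dists, method="average")

# Cut the tree at a chosen height (cf. the ~3.32 cut described in the
# Results) to obtain cluster labels.
labels = fcluster(Z, t=3.32, criterion="distance")
print(np.unique(labels).size, "clusters")
```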
(iii) Regression models. First, a multivariate logistic model was used to understand how different biological and animal characteristics were associated with Salmonella AMR for the total number of samples collected (n = 4047). The biological and animal characteristics used as independent variables included animal categories (swine farms, backyard chickens, free-range dairy, equine veterinary hospitals, wildlife rehabilitation centers) and Salmonella serogroups (B, C1, C2-C3, D, E1, ND). The backyard chicken category and serogroup B were used as reference levels because these groups were the most prevalent within our sample. Second, a parallel logistic model was fit using the same variables but restricted to Salmonella-resistant and non-resistant strains only. This sub-analysis was employed to better explore whether the independent variables affect the prevalence of AMR amongst Salmonella isolates. All statistical analyses were performed in RStudio (http://www.R-project.org, accessed on 10 December 2020), version 3.5.3. Our final model, which included all explanatory variables, was selected based on the best goodness-of-fit according to the Akaike Information Criterion (AIC = 116), compared to models including only one variable at a time (AIC > 130). The statistical significance of each explanatory variable was assessed using Wald's test; variables were considered statistically significant at p-value < 0.10.
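The model-selection workflow (treatment-coded categories, AIC comparison, Wald tests) maps directly onto statsmodels' formula interface, shown below as a sketch; the data frame is synthetic and the variable names are ours.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analytical table: one row per sample, with AMR presence
# (0/1) and the categorical predictors used in the text.
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "amr": rng.integers(0, 2, 500),
    "animal": rng.choice(["backyard_chicken", "swine", "equine"], 500),
    "serogroup": rng.choice(["B", "C1", "D"], 500),
})

# Reference levels mirror the text: backyard chickens and serogroup B.
full = smf.logit(
    "amr ~ C(animal, Treatment('backyard_chicken'))"
    " + C(serogroup, Treatment('B'))", data=df).fit(disp=0)
reduced = smf.logit("amr ~ C(serogroup, Treatment('B'))", data=df).fit(disp=0)

# Model selection by AIC; Wald tests appear in full.summary().
print(full.aic, reduced.aic)
print(np.exp(full.params))   # coefficients as odds ratios
```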
We did not incorporate other characteristics, such as system type (open or closed) and sampling type (environment and animal), due to multicollinearity problems with the variable of system category. Nevertheless, we carried out an exploratory analysis by dropping the latter variables and adding the new set of covariates (system and sampling types).
Salmonella Presence and Serogroups
In this study, 4047 samples from different animals were analyzed for Salmonella presence; 2.61% (n = 106) tested positive for Salmonella ( Figure 1). We predicted the serogroup using the molecular scheme, but in 22 strains, the serogroup was classified as not determined (ND) ( Figure 1 and Table S1). The most common serogroup was the B-type with 48 (45.3%) strains found in all animals that tested positive ( Figure 1). Salmonella serogroup D (14.1%) was observed in 15 strains from wild animals in rehabilitation centers, wild birds in wetlands and backyard flocks. Similarly, Salmonella serogroup C1 (14.1%) was found in 15 strains from an equine hospital, wild animals in rehabilitation centers and backyard chickens. Salmonella serogroups C2-C3 (3.8%) were detected in four strains, while serogroup E1 (1.9%) was observed in two strains. Twenty-two strains (20.7%) isolated from wildlife were classified as not determined ( Figure 1 and Table S1).
Antimicrobial Resistance and Susceptibility
A total of 47 out of 106 (44.3%) Salmonella strains analyzed had AMR. Twenty-eight of those were SDR, while 19 were MDR. The MDR strains were more frequently observed in closed systems (n = 13) than in free-range systems (n = 6) (Figure 1). Strains with AMR were obtained mostly from closed systems (n = 34), comprising industrial swine (n = 8), equine veterinary hospitals (n = 20) and wildlife in rehabilitation centers (n = 6). AMR was also observed in free-range animals (n = 13): free-range dairy farms (n = 1), wetland birds (n = 1) and backyard chickens (n = 11), as shown in Figure 1. In general, the antimicrobials showing the highest levels of resistance were ampicillin (n = 27), streptomycin (n = 26) and tetracycline (n = 26) (see Figure 2). MDR strains were resistant to nine antimicrobials: ampicillin, amoxicillin/clavulanic acid, chloramphenicol, ciprofloxacin, streptomycin, gentamicin, kanamycin, trimethoprim/sulfamethoxazole and tetracycline (Figure 2). Clustering of the antimicrobial resistance profiles identified in the Salmonella strains was obtained using agglomerative hierarchical clustering with a maximum Euclidean distance of 6.65 points between clusters. Our analysis identified arbitrary cut-off points within the distance spectrum; these specifications were chosen to allow the largest number of clusters to be computed (at a cut-off of 3.32) within two groups of antimicrobial resistance profiles (Figure 2), G1 and G2, which were located approximately 4.7 distance points apart. G1 included principally MDR strains (clusters 2 to 3), while G2 included SDR strains (clusters 4 to 8). G1 grouped 10 strains belonging to closed systems, all of them classified as either serogroup B or not determined (ND). In this group, the most extensive antimicrobial-resistance profile was a strain resistant to ampicillin, amoxicillin/clavulanic acid, chloramphenicol, streptomycin, gentamicin, kanamycin, trimethoprim/sulfamethoxazole and tetracycline. In addition, cluster 1 grouped two different strains from wildlife in rehabilitation centers with extensive antimicrobial-resistance profiles. G2 included 30 resistant strains from closed and free-range systems classified within serogroups E, ND, B and C. These strains had limited resistance profiles of one to two antibiotics. The most common resistance profile was triple resistance to ampicillin, streptomycin and tetracycline (Figure 2).
wildlife in rehabilitation centers with extensive antimicrobial-resistance profiles. G2 included 30 resistant strains from closed and free-range systems classified within serogroups E, ND, B and C. These strains had a profile of limited resistance from one to two antibiotics. The most common resistance profile was triple-resistant to ampicillin, streptomycin and tetracycline (Figure 2). Table 2 displays the descriptive statistics of our analytical sample. There is a high prevalence of the B serogroup, and most samples were collected from animals (not environmental) in both regression model samples. Diversely, the first model sample presents a large proportion of backyard chicken farms, whereas the group size is smaller in the sample used for Model 2. Wildlife staying at rehabilitation centers displayed the highest proportion in Model 2, and most animals came from closed spaces. Differently, most animals came from open sites in the Model 1 sample (Table 2). The results of our logistic regression are found in Figure 3. Model 1 shows that Salmonella strains from the equine hospital were 2.6 times more likely (odds ratio (OR) = 2.63; 95% CI = 0.95-7.25; p-value = 0.061) to be resistant to at least one of the antimicrobials tested, compared to backyard chicken farms (Table S2). Similarly, our results demonstrated that within industrial swine systems, Salmonella strains were 5.08 times more likely to become resistant than backyard chickens (OR = 5.08; 95% CI = 2.09-12.35; p-value ≤ 0.001). Moreover, Salmonella strains from serogroup D had 0.95 times lower probability of being resistant than the most common serogroup B (OR = 0.05; 95% CI = 0.01-0.47; p-value = 0.009). Likewise, serogroup C1 had a lower likelihood of becoming resistant compared to serogroup B (OR = 0.21; 95% CI = 0.05-0.84; p-value = 0.027). Industrial dairy farms were not considered due to a lack of variability over AMR prevalence and to avoid perfect multicollinearity. Our dependent variable indicates the presence of resistance to at least one antimicrobial for Salmonella. SE stands for standard error, whereas OR is for odds ratios. Pr > |z| is for p-value; a Some categories were dropped as they did not present variation of SDR prevalence (i.e., industrial dairy farms and other backyard animals); b Category dropped in Model 2 due to lack of variation of AMR prevalence. c Robust standard errors were estimated. * p < 0.1, ** p < 0.05, *** p < 0.01. Our dependent variable indicates the presence of resistance to at least one antimicrobial for Salmonella. SE stands for standard error, whereas OR is for odds ratios. Pr > |z| is for p-value; a Some categories were dropped as they did not present variation of SDR prevalence (i.e., industrial dairy farms and other backyard animals); b Category dropped in Model 2 due to lack of variation of AMR prevalence. c Robust standard errors were estimated. * p < 0.1, ** p < 0.05, *** p < 0.01.
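To make the clustering step concrete, the following is a minimal sketch (not the authors' code) of how binary resistance profiles can be clustered hierarchically with a Euclidean metric, in the spirit of the conglomerate-hierarchical analysis above; the example profiles are invented for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: strains; columns: antimicrobials (1 = resistant, 0 = susceptible).
# These example profiles are hypothetical, not the study's data.
profiles = np.array([
    [1, 1, 1, 0, 0, 0, 0, 0, 0],  # triple resistance: AMP/STR/TET
    [1, 1, 1, 1, 1, 1, 1, 0, 1],  # extensive MDR profile
    [1, 0, 0, 0, 0, 0, 0, 0, 0],  # single-drug resistance
    [0, 1, 1, 0, 0, 0, 0, 0, 0],  # double resistance
])

# Agglomerative (conglomerate-hierarchical) clustering, Euclidean distance.
Z = linkage(profiles, method="complete", metric="euclidean")

# Cut the dendrogram at an arbitrary distance threshold to form clusters,
# analogous to choosing a cut-off within the 0-6.65 distance spectrum.
clusters = fcluster(Z, t=1.5, criterion="distance")
print(clusters)
```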
Prevalence of AMR
On the other hand, Model 2 shows the same model after reducing the sample to Salmonella-positive strains. Industrial swine isolates were omitted, in contrast with Model 1, as this group did not display variation in the prevalence of AMR, showing perfect collinearity with our dependent variable (Figure 3a). Wildlife in rehabilitation centers and wetland birds had lower odds of carrying resistant strains compared to backyard chicken farms (OR = 0.16, 95% CI = 0.03-0.89, p-value = 0.036; OR = 0.12, 95% CI = 0.01-1.123, p-value = 0.075, respectively). Regarding serogroups, our results were consistent with those observed in Model 1. Figure 3b compares both models from Figure 3a by plotting their respective ORs. There are no meaningful differences for the observed characteristics, except for wildlife in rehabilitation centers and the ND serogroup. Figure 3a also depicts the average predicted prevalence of AMR by each model. On average, the presence of Salmonella with AMR in samples collected from closed environments was threefold that of their open counterparts (for both models). In contrast, in Model 2, Salmonella with AMR from animal samples collected in open spaces was twice that of closed sites (Figure S1).
Our exploratory analysis partly supports these findings (Figure S1). We included system type (open or closed), sampling type (environment or animal) and serogroup as independent variables in Model 3 to explain AMR (see Table 2). There is a link between samples collected at closed sites and the prevalence of AMR (OR = 3.90; 95% CI = 1.36-11.20; p-value = 0.011). No other association was found between the sampling type (collected directly from animals or from fecal samples in the environment) and resistance levels (Table S2). We could not perform further analyses with MDR Salmonella due to a lack of variability and multicollinearity problems. However, the hierarchical analysis identified that most MDR profiles (grouped in G1, clusters 2 to 3) were distinct from the profiles grouped in clusters 4 to 8 (see Figures 2 and 3), and cluster 1 was set apart with the highest resistance profile (see Figure 2).
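As a sketch of how such a model can be fitted, the snippet below uses statsmodels' formula API with robust standard errors, mirroring the Model 3 setup (system type, sampling type and serogroup as predictors of AMR). The data frame and variable names are hypothetical placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 106  # one row per strain, matching the study's sample size
df = pd.DataFrame({
    "amr": rng.integers(0, 2, n),                        # resistant to >= 1 drug
    "system": rng.choice(["closed", "open"], n),         # system type
    "sample": rng.choice(["animal", "environment"], n),  # sampling type
    "serogroup": rng.choice(["B", "C1", "D", "ND"], n),
})

# Logistic regression with robust (HC1) standard errors.
fit = smf.logit("amr ~ C(system) + C(sample) + C(serogroup)", data=df).fit(
    cov_type="HC1", disp=False)

# Report odds ratios and 95% CIs on the exponentiated scale.
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))
```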
Discussion
Our study reports 47 antimicrobial-resistant Salmonella strains (44.3%). The presence of Salmonella strains was mainly observed in serogroups B and C1 within different animal populations in Chile. Consistent with our study, other studies carried out in Chile have found the most common serovars to be Enteritidis, Typhimurium and Infantis (serogroups D, B and C1) [15,18,22,24,27-30]. However, S. Enteritidis has represented the highest percentage of isolates [15-19,22]. In Chile, the Institute of Public Health has reported that S. Enteritidis is the most common serovar associated with human illnesses [27,28], accounting for 61% of all Salmonella cases [27]. In 2019, a number of cases of bacterial gastroenteritis caused by Salmonella spp. were reported in Chile [29]; out of the total 212 outbreaks that occurred over time, 106 originated from Salmonella spp. [29]. Similarly, a previous article [30] also indicated high contamination by S. Infantis of chicken meat available for consumption in supermarkets. However, a low prevalence is reported in backyard poultry farms (approximately 1%) [15]. It could be hypothesized that the low isolation rate is due to the difficulty of recovering and identifying Salmonella spp., which further complicates our understanding of the risk of transmission and dissemination among animal and human populations [15,22]. The risk could be greater if these Salmonella strains have AMR, and even more substantial if MDR strains are present. Unfortunately, we could not obtain samples from poultry farms due to strong biosecurity measures and the limited interaction between industry and academia, which is reflected in the lack of studies on these production systems. Yet, studies are needed to achieve a better understanding of the dissemination of Salmonella AMR. Therefore, we set about exploring and describing the circulation of Salmonella AMR and MDR strains in animal systems in Chile.
Regarding antimicrobial testing, the most common resistance profile was triple resistance to ampicillin, streptomycin and tetracycline. This is notable because these antimicrobials do not correspond to the treatments frequently given to domestic animals [31]. However, the amount of tetracycline used in marine-farmed salmon, 0.7-1.5 (L/kg), corresponds to one of the highest historic values reported by the National Fisheries Service (SERNAPESCA) in Chile [32].
Our results also suggest that S. Enteritidis is highly persistent in animal production systems. We reported that serogroup D strains were 18 times less likely to be resistant to at least one antimicrobial agent than the most common serogroup (serogroup B). Our results suggest that serogroups B and C1, including the Salmonella serovars Typhimurium and Infantis, may have a greater facility for acquiring resistance genes [3]. Our results resemble those found in NARMS and European Union (EU) reports, which have stated that S. Enteritidis (serogroup D) had lower antimicrobial-resistance levels than other serovars [10,12]. Coincidentally, we observed through our clustering analysis that MDR strains were grouped in G1, which corresponded to serogroup B and closed systems (horse strains from equine veterinary hospitals and free-range cows). Moreover, cluster 1 grouped two strains from wildlife in rehabilitation centers with broad resistance profiles. In this sense, it has been previously demonstrated that wildlife thriving in anthropized landscapes, such as owls [16] and foxes [33], together with migratory birds [19], may constitute environmental reservoirs of AMR. Consequently, this might explain the acquisition of international clones of BLEA-producing E. coli and S. Infantis (CTX-M) with a broad resistome and virulome. Both strains have been previously detected in Chile and South America [16]. Wild species kept at this human-wild animal interface should be monitored with special attention in the future. Besides, the role of closed production systems, especially in meat production, which is affected by Campylobacter jejuni resistance [34], and marine-farmed salmon, which are affected by Piscirickettsia salmonis resistance [35], should also be considered to control the dissemination of AMR and MDR strains in the environment. Examples of contamination may include the use of watercourses, risking vegetable uptake and animal consumption [14].
In our results, we also found that Salmonella strains from swine production systems were five times more likely to be resistant to at least one antimicrobial (OR = 5.08). This result coincides with the study by Vico et al. [36], who reported a high prevalence of non-typhoidal Salmonella (41.5%; 95% CI: 37.6-45.6%), of which a further 86% were MDR strains. Another interesting result was obtained from equine veterinary hospitals, whose strains were 2.63 times more likely to be resistant to at least one antimicrobial [37]. Potentially stressful conditions and extensive use of antimicrobials are known to act as trigger factors for AMR in hospitalized animals [37]. To address this, stewardship programs have been implemented in several countries, such as Switzerland, to avoid further antimicrobial damage to hospitalized animals [38].
Rather than an accurate estimate of the prevalence of AMR in different Salmonella serovars, our study can be seen as a point of surveillance of Salmonella AMR in a highly unequal country that plays an essential role in global food production [39]. Future research using systematic and more extensive sampling techniques in different animal settings could help to more accurately estimate AMR prevalence across different closed systems. This process could help us to understand the drivers of AMR and MDR Salmonella across farms and within country regions. Data on antimicrobial susceptibility, mainly from hospitalized domestic animals, are very limited in numerous developing and highly unequal countries such as Chile [40]. The use of antimicrobials in industrial food production is a significant concern affecting countries worldwide [39]. For instance, the amount and frequency of antimicrobials used on wild animals in captivity has remained mostly unknown, even though a recent study has shed some light on the prevalence of AMR [41]. In addition, salmonellosis surveillance in food, animals and environments has been poorly reported in the literature, with barely any focus on developing and highly unequal countries [42]. It is essential to estimate the risks of AMR to humans, which has mostly been treated as a future rather than a present challenge [42]. A recent publication reported an increased prevalence of AMR in zoonotic pathogens, such as Salmonella and Escherichia coli, in several developing countries [16]. In that study, Chile was classified as an AMR hotspot. Furthermore, the study highlighted the critical importance of delivering point prevalence surveys to enhance the data quality of AMR trajectory modelling [16].
This study has some shortcomings. First, the low number of samples in each system category produced wide 95% CIs. Second, there was a lack of variability across independent variables, which can be attributed to selection bias; however, this is one of the first studies looking at the presence of Salmonella AMR in different animals in Chile by employing a massive testing scale. Third, there was a disproportionately large number of backyard chicken farms, which can lead to sampling inconsistencies; nonetheless, flocks of chickens are one of Chile's largest animal populations due to the country's high demand for chicken meat. Fourth, the numbers of equine and wild animals were relatively high compared to their specific populations in Chile. Fifth, multidrug resistance lacked variability; thus, we could not perform further regression analyses for MDR Salmonella. Future research is necessary to refine each category's influence by using a more balanced and larger sample size.
In summary, a better-standardized format to upload and generate data on Salmonella AMR could facilitate future estimations of its burden on low-resourced and/or highly impoverished countries [43]. The present study's results help to improve the existing mechanisms for collecting data on AMR within the region. Moreover, this study identifies factors that are highly associated with Salmonella AMR in a Chilean subsample of wild and domestic animals. Stewardship schemes and a well-guided national program aiming to reduce Salmonella AMR levels in wild and domesticated animals are essential to help contain further transmission. This study has implications for other countries in the Latin American context with similar environments and characteristics, such as Argentina, Brazil, Colombia, Mexico and Venezuela [44].
Conclusions
This study describes Salmonella isolates obtained from different animal systems in Chile by characterizing their serogroups and antimicrobial resistance levels. We report a significant presence of Salmonella AMR in animal systems. Serogroups B and C1 were the most frequently observed among AMR and MDR Salmonella strains, while S. Enteritidis (serogroup D) had lower AMR levels than other serovars, such as those of serogroup B. The factors identified in this study could be used in the design of public policies that aim to tackle AMR in the animal industry from a One Health perspective.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ani11061532/s1: Figure S1, Predicted AMR prevalence from Figure 3 by system and sampling types; Table S1, Master table of Salmonella strains isolated from animal systems; Table S2, Results of the multivariate logistic regression model exploratory analysis.
"Biology"
] |
Toward More Efficient WMSN Data Search Combined FJLT Dimension Expansion With PCA Dimension Reduction
With the rapid development of 5G technology, the scales and dimensions of the data processed by Wireless Multimedia Sensor Network (WMSN) applications will be larger than ever before. Searching such high-dimensional data becomes very difficult for WMSN applications. This paper proposes a more efficient WMSN data search algorithm based on the fruit fly olfactory neural framework, combined with the Fast Johnson-Lindenstrauss Transform (FJLT) and Principal Component Analysis (PCA), called Fast Johnson-Lindenstrauss Transform Combined Principal Component Analysis-based Fly Locality-Sensitive Hashing (FP-FLSH). First, the data features are quantified numerically. Then, the fruit fly olfactory nervous system framework is used to project the data to a higher dimensional metric space using the low-distortion projection FJLT. Finally, the dimensionality reduction process adopts the PCA strategy to retain the maximum amount of information and constructs the search index structure. Experiments are conducted on three larger scale benchmark data sets. Compared with the current mainstream search algorithms, the proposed method exhibits more efficient performance and can be effectively applied to WMSN applications.
I. INTRODUCTION
A Wireless Multimedia Sensor Network (WMSN) is a new kind of sensor network that has been widely used in security monitoring, intelligent transportation, environmental monitoring and other fields. WMSN sensor nodes are equipped with cameras, microphones and other sensors, which can collect and process video, audio, image and other multimedia data from the physical environment. However, with the development of 5G technology, the dimensions and scale of these WMSN data are larger than before [1]. Searching such high-dimensional data becomes very difficult for WMSN applications. A large number of studies show that fast search [2]-[4] has great application potential for WMSN applications, so it is of great significance to build a search structure with good performance.
At present, much research on WMSN data search has been performed [5]-[8]. These studies show that WMSN data search consists of three steps: (1) preprocessing the original monitoring data, (2) building an index structure using the standardized data, and (3) mapping the query object into the index structure to obtain the query result. Building a good index structure from standardized data is a fundamental step. There is some research on index construction in [9]-[12]; it can be divided into spatial-partitioning-based methods, random-projection-based hash methods and learning-based hash methods. Although these methods have made some progress in query accuracy, they face problems of storage space and computational cost in high dimensional space (data dimension exceeding 100), which are mainly reflected in three aspects:
Problem 1: The conventional tree index structure [13], [14] can perform well for small-scale data search. However, the performance of these methods degrades when processing WMSN data.
Problem 2: The random projection hash structure needs to build long index codes to achieve good search performance, but this consumes considerable memory resources.
Problem 3: The learning hash structure requires a long training time and consumes considerable time resources while achieving good search performance.
To address these problems, this paper proposes a new data search method called FP-FLSH. Unlike the latest data search methods, the main contributions of this paper are the following.
1) The FP-FLSH method proposed in this paper uses a low-distortion projection method, the FJLT, for dimensional expansion. Based on the olfactory system of the fruit fly, the characterized WMSN data are projected into a higher dimensional metric space.
2) A high-quality index code solution is proposed. We use the PCA method to reduce the dimensions and retain the features carrying the most information.
3) The method has better robustness and retrieval precision for WMSN data, and can perform well without constructing long index codes. Compared with the FLSH method published in Science [15], our FP-FLSH method is more efficient.
The rest of this paper is organized as follows. Section II describes the related work. Section III proposes the novel FP-FLSH algorithm. Section IV discusses the distance preserving properties of the FP-FLSH algorithm. Section V conducts extensive experiments and compares the results with some mainstream algorithms using three real large-scale data sets. The conclusion and future work are given in Section VI.
II. RELATED WORK
The research on index structure of WMSN data is critical in many areas, such as information retrieval, machine learning, and pattern recognition. In general, data index structures can be divided into three categories.
In the first category, the index structure is based on spatial partitioning. Among the most representative of such methods are the KD-Tree [13], R-Tree [14], etc. These algorithms perform well when the data dimension does not exceed 20. However, when dealing with high-dimensional data, they encounter the ''curse of dimensionality'', and their performance significantly decreases, sometimes to below that of linear scans [16]. To solve the problem of data search in high dimensional space, many scholars have studied approximate neighbor search [17]-[19]. The use of approximate neighbor search optimizes the time complexity of the algorithm with respect to the similarity query.
In the second category, the index structure is a hash method based on random projection. Among the most representative of such methods are Locality-Sensitive Hashing (LSH) [20], p-stable LSH [21], Order statistics LSH [22], etc. Indyk of Stanford University proposed the LSH method [20]. Here, the conditions on a hash function family H and a hash function h belonging to H are as follows. Let D(x, y) represent the distance between points x and y, X represent the data collection, and Pr_H represent the probability that two points are mapped to the same bucket after hashing. A family H is called (r_1, r_2, p_1, p_2)-sensitive, where r_1 < r_2 and p_1 > p_2, if for all x, y ∈ X:

$$D(x, y) \le r_1 \Rightarrow \Pr_{\mathcal{H}}[h(x) = h(y)] \ge p_1,$$
$$D(x, y) \ge r_2 \Rightarrow \Pr_{\mathcal{H}}[h(x) = h(y)] \le p_2.$$

Given a set of points P in a metric space (X, D) containing n data points, the LSH algorithm selects L different hash functions and maps each data point x to a hash code of length L:

$$g(x) = (h_1(x), h_2(x), \ldots, h_L(x)).$$

Datar of Stanford University proposed an improved LSH based on a p-stable distribution [21]. By adopting a stable distribution under different dimensions, the algorithm can adapt to different distance metrics. Subsequently, Mayank and Panigrahy of Stanford University successively proposed new search algorithms [23], [24] in Euclidean space. To map similar data points to similar hash codes with high probability, traditional LSH usually requires a large number of hash tables, but this undoubtedly increases the computational complexity and memory occupancy. To solve this problem, Order statistics LSH [22] was proposed by Kave Eshghi of Hewlett Packard Laboratories in the United States. This method uses properties of the rank distribution to develop a locality-sensitive hashing family, which has a good collision rate for the cosine measure but takes longer to process a query. The FJLT method [25] was proposed by Nir Ailon and Bernard Chazelle of Princeton University. This method uses the Heisenberg principle of the random Fourier transform [26] to preprocess the sparse projection, but this low-distortion embedding was only demonstrated in theory and did not show its superiority in practical applications.
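As an illustration of such an (r_1, r_2, p_1, p_2)-sensitive family for Euclidean distance, the sketch below implements a small p-stable (Gaussian) hash family in the spirit of [21]; the class and parameter names are illustrative, not from any of the cited papers.

```python
import numpy as np

class PStableLSH:
    """Each hash: h(x) = floor((a.x + b) / w), with a ~ N(0, I), b ~ U[0, w)."""
    def __init__(self, dim, n_hashes=16, bucket_width=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal((n_hashes, dim))
        self.b = rng.uniform(0.0, bucket_width, n_hashes)
        self.w = bucket_width

    def hash(self, x):
        # A length-L code g(x); near points collide with high probability.
        return tuple(np.floor((self.a @ x + self.b) / self.w).astype(int))

lsh = PStableLSH(dim=128)
x = np.random.rand(128)
y = x + 0.01 * np.random.randn(128)  # a near neighbour of x
print(lsh.hash(x) == lsh.hash(y))    # likely True for close points
```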
In the third category, the index structure is a hash method based on learning. Among the most representative of such methods are Kernelized Locality-Sensitive Hashing (KLSH) [27], Spherical Hashing (SPH) [28], Principal Component Hashing (PCH) [29], etc. Brian of UC Berkeley proposed the KLSH algorithm [27]. Here, the kernel method is introduced into the index structure; by mining the internal structure of the data, additional training time is accepted in order to increase the retrieval precision. Jae-Pil Heo of the Korea Advanced Institute of Science and Technology proposed SPH [28]. Here, the hash bits are determined by projecting the data onto a hypersphere instead of a hyperplane to maintain the spatial consistency of the original data points. PCH [29] was proposed by Jing of Microsoft Research Asia. Here, principal component hashing is introduced into the retrieval to enhance the robustness of the algorithm to different data distributions.
In the past few years, some new research works have emerged in this field, including the Deep Convolutional Hashing (DCH) method [30] proposed by Sapkota, and the HashNet method [31] and the Deep Visual-Semantic Quantization (DVSQ) method [32] proposed by Cao. Although index structures based on deep neural networks have certain advantages with respect to retrieval accuracy, their training time on large-scale data sets is long, and the training quality is highly sensitive to network parameters; therefore, there are still obstacles to their practical application. Sanjoy proposed a novel random-projection-based method, FLSH [15], in Science, in which the hashing process of the fruit fly's olfactory nerve is simulated on the data sets. There are still some problems with FLSH: although it uses the fruit fly olfactory nerves to simulate the hash process, its random sparse matrix causes a great loss of similarity, and the winner-passing strategy is not a feasible method.
Therefore, this paper proposes the FP-FLSH algorithm that is based on the fruit fly olfactory neural framework combined with the FJLT mapping and the PCA algorithm. The method can minimize the similarity loss of data.
III. FP-FLSH: LOCALITY-SENSITIVE HASHING ALGORITHM
A. OVERALL FRAMEWORK OF FP-FLSH
The FP-FLSH method proposed in this paper can provide an effective solution for WMSN data search. The method consists of three basic modules, as shown in Fig. 1.
1) DATA FEATURE PROCESSING
Characteristic processing of WMSN data. In this process, the feature vectors of image data are composed by extracting various features, including color features, texture features, shape features, etc. Audio data are transformed into feature vectors by extracting frequency-domain features and wavelet features. The feature extraction of video adopts the Word2vec model, which does not depend on video tags.
2) FJLT MAPPING
The FJLT mapping consists of the Sparse-JL Matrix, the Walsh-Hadamard Matrix, and the Diagonal Matrix. The WMSN data after characterization are projected to higher dimensional spaces by FJLT mapping.
3) RETAIN MAIN COMPONENTS BY PCA
The projected data adopt the PCA strategy to preserve the data features with the largest amount of information, reducing the similarity loss of data objects and constructing high-quality index codes.
B. RANDOM PROJECTION BASED ON FJLT DIMENSION EXPANSION
The FJLT is a low-distortion linear map from R^d to R^m with a random distribution. The random embedding Φ ∼ FJLT is composed of three real-valued matrices, as shown in Fig. 2: Φ = PHD, where P and D are random matrices and H is a deterministic matrix. It is well known that sparse matrices alone are not suitable for low-distortion embeddings: especially when the input data are sparse vectors, the variance of the estimator is too high, which inevitably causes the data to lose a large amount of precision. Matrix P is essentially a sparse matrix, so we cannot use it alone as a fast JL transform. We use the Heisenberg principle of the Fourier transform to overcome this obstacle. The mapping HD ensures that the data are smooth, and since HD is orthogonal, the Euclidean norm is kept constant. Therefore, this Fourier-transform-based random projection minimizes the distortion and enhances the distance preservation of the conversion matrix, resulting in a conversion matrix with higher time efficiency and precision.
1) Matrix P: a sparse m × d matrix whose elements are independently distributed, with m = αd, where α is a parameter and d is the original dimension. With probability 1 − q, P_ij is set to 0; otherwise P_ij is drawn from a normal distribution with expectation 0 and variance q^{-1}:

$$P_{ij} \sim \begin{cases} \mathcal{N}(0, q^{-1}) & \text{with probability } q, \\ 0 & \text{with probability } 1 - q, \end{cases}$$

where q is the sparsity constant of the transform.
2) Matrix H: a d × d normalized Walsh-Hadamard matrix,

$$H_{ij} = d^{-1/2}(-1)^{\langle i-1,\, j-1\rangle},$$

where ⟨i, j⟩ is the dot product (modulo 2) of the bit vectors of i and j in binary.
3) Matrix D: a d × d diagonal matrix; with probability 1/2, D_ii is set to 1, and otherwise to −1. For the input data X ∈ R^{n×d}, u = HDX.
The FJLT mapping is used to project the data into a higher dimensional metric space, thereby bionically simulating the olfactory nervous system of the fruit fly. Compared with the sparse mapping, the FJLT mapping has better coverage, which can better preserve the accuracy of the data.
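The construction can be sketched as follows, assuming d is a power of two so the Walsh-Hadamard matrix is defined; here the sparsity q is treated as a free parameter rather than the theoretical value from the FJLT analysis, and all names are illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

def fjlt(x, m, q=0.1, seed=0):
    rng = np.random.default_rng(seed)
    d = x.shape[0]

    # D: random diagonal of +/-1, each sign with probability 1/2.
    D = rng.choice([-1.0, 1.0], d)

    # H: normalized d x d Walsh-Hadamard matrix.
    H = hadamard(d) / np.sqrt(d)

    # P: m x d sparse matrix; entries are N(0, 1/q) with probability q, else 0.
    mask = rng.random((m, d)) < q
    P = np.where(mask, rng.normal(0.0, np.sqrt(1.0 / q), (m, d)), 0.0)

    # phi(x) = P H D x: HD smooths sparse inputs before the sparse projection.
    return P @ (H @ (D * x))

x = np.random.rand(128)    # d = 128, a power of two
z = fjlt(x, m=6 * 128)     # expand to m = alpha*d with alpha = 6
print(z.shape)             # (768,)
```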
C. PCA ALGORITHM BASED FEATURE EXTRACTION
We keep the characteristics of the data to the greatest extent for the data that are projected into the high-dimensional space. The issue then becomes how to extract the principal components of the projected data, which is a dimensionality reduction process. The key is how to choose the basis that retains the most information. Suppose we have a set of n-dimensional vectors and we need to reduce them to k-dimensional (k < n) vectors; then we must choose k basis vectors that retain the most features. Suppose first that there are only two fields, a and b, grouped into the rows of a matrix X:

$$X = \begin{pmatrix} a_1 & a_2 & \cdots & a_m \\ b_1 & b_2 & \cdots & b_m \end{pmatrix}.$$

Multiplying X by its transpose with the factor 1/m gives

$$\frac{1}{m} X X^{T} = \begin{pmatrix} \frac{1}{m}\sum_{i} a_i^2 & \frac{1}{m}\sum_{i} a_i b_i \\ \frac{1}{m}\sum_{i} a_i b_i & \frac{1}{m}\sum_{i} b_i^2 \end{pmatrix},$$

so (assuming zero-mean fields) the two elements on the diagonal of this matrix are the variances of the two fields, while the off-diagonal elements are their covariance. By the rules of matrix multiplication, this generalizes: given m n-dimensional data records arranged by column in an n × m matrix X, let C = (1/m) X X^T. C is symmetric, the diagonal of the matrix holds the variance of each field, and the element in the i-th row and j-th column (equal to that in the j-th row and i-th column) is the covariance of fields i and j. Based on this derivation, achieving the optimization goal is equivalent to diagonalizing the covariance matrix: the elements other than the diagonal become zero, and the diagonal elements are arranged from largest to smallest from top to bottom. We further observe the relationship between the covariance matrices before and after the basis transformation. Let the covariance matrix of the original matrix X be C, let P be a matrix whose rows form the new basis, and let Y = PX be the data after transforming X to the basis P. Then the covariance matrix of Y is PCP^T. Thus the optimization goal becomes finding a matrix P such that PCP^T is a diagonal matrix with its diagonal elements arranged in descending order; the first k rows of P are then the bases sought. As noted above, the covariance matrix C is a symmetric matrix, and in linear algebra real symmetric matrices have a series of very useful properties.
1) The eigenvectors corresponding to different eigenvalues of a real symmetric matrix must be orthogonal.
2) If an eigenvalue λ has multiplicity r, then there must be r linearly independent eigenvectors corresponding to λ, so these r eigenvectors can be unit-orthogonalized.
Therefore, a real symmetric matrix of n rows and n columns must have n unit orthogonal eigenvectors. Let the n eigenvectors be e_1, e_2, ..., e_n, and form them into a matrix by column:

$$E = (e_1, e_2, \ldots, e_n).$$

Then, for the covariance matrix C, we have

$$E^{T} C E = \Lambda,$$

where Λ is a diagonal matrix whose diagonal elements are the eigenvalues corresponding to each eigenvector (possibly repeated). Therefore, we obtain the matrix P that we need: P = E^T, a matrix in which the unitized eigenvectors of the covariance matrix are arranged in rows, where each row is an eigenvector of C. If the rows of P are ordered from top to bottom by the eigenvalues in Λ, then multiplying the matrix consisting of the first k rows of P by the original matrix X yields the matrix Y that holds the most information.
As for the determination of the number of principal components, namely the value of k: if k is too large, the compression rate of the data is too low; in the limit case k = n (the projection is only a rotation onto different bases). On the contrary, if k is too small, the approximation error is too large. We usually use the percentage of retained variance to determine the value of k. In general, let λ_1, λ_2, ..., λ_n denote the eigenvalues of Λ (in descending order), so that λ_j is the eigenvalue of the corresponding eigenvector e_j. If we retain the first k components, the retained fraction of the variance is

$$\frac{\sum_{j=1}^{k} \lambda_j}{\sum_{j=1}^{n} \lambda_j}.$$

We determine the minimum value of k that satisfies

$$\frac{\sum_{j=1}^{k} \lambda_j}{\sum_{j=1}^{n} \lambda_j} \ge 1 - \tau;$$

in practical applications, according to the conclusion of [33], τ is a very small value. PCA is used to preserve the data features with the most information, and the main information remains after the dimension shrinks. This maximizes the query accuracy of the query mechanism in the WMSN.
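The whole PCA step can be sketched as follows: diagonalize the covariance matrix, sort eigenvectors by eigenvalue, and keep the smallest k whose retained-variance ratio reaches 1 − τ. This is a minimal sketch with illustrative names, not the authors' implementation.

```python
import numpy as np

def pca_reduce(X, tau=0.01):
    """X: n x m matrix, one n-dimensional record per column (as in the text)."""
    Xc = X - X.mean(axis=1, keepdims=True)   # center each field
    C = (Xc @ Xc.T) / X.shape[1]             # covariance matrix C = (1/m) X X^T
    eigvals, eigvecs = np.linalg.eigh(C)     # C is symmetric, so eigh applies
    order = np.argsort(eigvals)[::-1]        # sort eigenvalues large to small
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Minimal k with (sum_{j<=k} lambda_j) / (sum_j lambda_j) >= 1 - tau.
    ratio = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(ratio, 1.0 - tau)) + 1

    P = eigvecs[:, :k].T                     # first k rows of P: the new bases
    return P @ Xc                            # Y = PX holds the most information

Y = pca_reduce(np.random.rand(768, 500))
print(Y.shape)
```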
D. FRUIT FLY OLFACTORY NEURAL FRAMEWORK
University of California scholars Sanjoy, Charles and Saket released the FLSH algorithm in Science [15]. The FLSH algorithm is inspired by the olfactory nervous system of the fruit fly, and it is a new variant that combines this system with locality-sensitive hashing. The whole process is mainly divided into three steps. The first step is to preprocess the input data, as is done in many computing pipelines. The second step involves the expansion of the number of neurons: the Projection Neurons (PNs) are amplified into Kenyon Cells (KCs) through a sparse matrix M, an m × d sparse binary random matrix with m = 20d, in which each entry M_ij is independently set to 1 with probability p and to 0 otherwise. In the third step, strong inhibition feedback from a single inhibitory neuron is applied via the winner-passing mechanism: the values of the top k Kenyon cells (usually k is 5% of m) are retained and the rest are set to zero. This winner-passing mechanism produces a sparse vector z ∈ R^m (called a tag) with

$$z_i = \begin{cases} (Mx)_i & \text{if } (Mx)_i \text{ is among the top } k \text{ values of } Mx, \\ 0 & \text{otherwise.} \end{cases}$$

However, we find that sparse binary random matrices are not suitable for low-distortion data projection, and the neuron suppression strategy of the winner-passing mechanism largely sacrifices the similarity between data objects. Therefore, this paper models FLSH and proposes the framework model of the fruit fly olfactory nervous system. The model is divided into two parts. First, the extracted data are mapped to a higher dimensional space. Second, the data features with the most information are retained and used as indexes. The most important contribution of the locality-sensitive hashing strategy using the olfactory nervous system framework of the fruit fly is to change the traditional construction of locality-sensitive hashing data indexes and to establish a connection between the cognitive neural system and approximate neighbor search. This provides a new idea for hash search. The tag generated by the fruit fly neural framework model can maintain the expected distance of the input odor, minimize the loss of similarity, and optimize the accuracy of the FP-FLSH algorithm.
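The FLSH tagging step described above can be sketched as follows: a sparse binary expansion followed by a winner-take-all step that keeps only the top k Kenyon-cell activations. The values of p and k below are illustrative.

```python
import numpy as np

def fly_hash(x, expansion=20, p=0.1, top_frac=0.05, seed=0):
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    m = expansion * d                          # m = 20d Kenyon cells

    # M: m x d sparse binary matrix; each entry is 1 with probability p.
    M = (rng.random((m, d)) < p).astype(float)

    y = M @ x                                  # PN -> KC projection
    k = int(top_frac * m)                      # keep ~5% of the m activations

    # Winner-passing step: zero all but the k largest values (the sparse tag z).
    z = np.zeros(m)
    top = np.argpartition(y, -k)[-k:]
    z[top] = y[top]
    return z

z = fly_hash(np.random.rand(128))
print(np.count_nonzero(z))                     # k nonzero entries
```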
Lemma 1: If two inputs n, n′ ∈ R^d are projected to z, z′ ∈ R^m, where q is the sparsity constant defined above, then the distance between the data after mapping is determined by the parameters m and q. Since the value of q is fixed by the sparsity constant, we only discuss the parameter m: when m is large enough, ||z − z′|| is tightly concentrated around its expected value, so that the first step of the fruit fly olfactory neural framework produces tags that preserve the distances of the input data in expectation. In the second step, we retain the data features with the largest amount of information through PCA, keeping the main information after the dimension is shrunk.
Together, these results demonstrate that the fruit fly olfactory neural framework model can improve the accuracy of the FP-FLSH algorithm. The computational complexity of the entire FP-FLSH algorithm is determined by the third step, which uses the PCA method to preserve the top k data features with the most information. Its time complexity is O(kn²). Therefore, the computational complexity of the FP-FLSH algorithm grows quadratically with the amount of data.
IV. DISTANCE PRESERVATION ANALYSIS OF FP-FLSH
The conventional approach to hashing is to reduce the dimension of the data so as to achieve fast search, but the similarity between data is greatly affected in the hashing process. In the FP-FLSH algorithm proposed in this paper, in order to establish the index structure, the data are first projected to a higher-dimensional metric space using the FJLT projection transformation. Then, using the PCA method, the data features with the most information are retained and used as an index. To illustrate that our proposed FP-FLSH algorithm has good distance preservation, we introduce the concept of the maximum regression similarity.
Definition: Let the size of the data before the dimension change be n and the dimension be l. When the dimension after hash mapping and dimension reduction is g, the similarity loss parameter is ε_1, and the corresponding maximum regression similarity is TR_1 = 1 − ε_1. This function represents the maximum retention of data object similarity. When the dimension after hash mapping and dimension reduction is S = v · g, the similarity loss parameter is ε_2, and the corresponding maximum regression similarity is TR_2 = 1 − ε_2. Then the difference of the maximum regression similarity is TR = TR_2 − TR_1.

Algorithm 1 FP-FLSH
Input: query data Q = (q_1, q_2, ..., q_d) ∈ R^{1×d};
n: the amount of data;
r: the number of approximate nearest neighbors searched;
d: initial dimension of the data;
k: the number of hash codes retained by PCA;
m: the dimension after FJLT mapping.
1: Dataset feature processing → X
2: X is mapped by FJLT into Y = (y_1, y_2, ..., y_m) ∈ R^{n×m}; y = FJLT(Q), Q ∈ X
3: Y is reduced by PCA into M = (m_1, m_2, ..., m_k) ∈ R^{n×k}; p = PCA(y), y ∈ Y
4: R = Query(p, M, r)
Output: R, the neighbors of Q
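A compact end-to-end sketch of Algorithm 1 is given below under two simplifications: a Gaussian random projection stands in for the full PHD construction, and the index is queried by a brute-force scan over the reduced codes. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha, k, r = 1000, 128, 6, 32, 10

X = rng.random((n, d))                     # characterized WMSN data
Q = rng.random(d)                          # query point

# Step 2: expand to m = alpha*d dimensions (stand-in for the FJLT mapping).
A = rng.standard_normal((alpha * d, d)) / np.sqrt(alpha * d)
Y = X @ A.T
y_q = A @ Q

# Step 3: PCA down to k dimensions, fitted on the expanded data.
mu = Y.mean(axis=0)
_, _, Vt = np.linalg.svd(Y - mu, full_matrices=False)
M = (Y - mu) @ Vt[:k].T                    # n x k index codes
m_q = (y_q - mu) @ Vt[:k].T

# Step 4: return the r nearest codes as approximate neighbors of Q.
R = np.argsort(np.linalg.norm(M - m_q, axis=1))[:r]
print(R)
```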
Theorem: Data dimension expansion achieves a higher maximum regression similarity than data dimension reduction, where the data dimension is l, the dimension after expansion is V2, and the dimension after reduction is V1. Proof: from the expression for TR, when n and g are fixed, the value of TR increases as v increases; that is, the maximum regression similarity after dimension expansion is better than that after dimension reduction. When n and v are fixed, the value of TR decreases as g increases, and when g → +∞, TR → 0, so the similarity loss parameter tends to 0. From this theorem, we conclude that expanding the dimension improves the maximum regression similarity compared to reducing it: expanding the dimensions of the data can reduce the loss of similarity between data objects. Moreover, according to findings on the fruit fly olfactory system [34], the olfactory nerve of the fruit fly transmits different odors to more neurons to improve its ability to discriminate between odors. The algorithm in this paper expands the data dimension and therefore has better distance preservation.
V. EXPERIMENT AND RESULT ANALYSIS
A. EXPERIMENTAL PLANNING
In the experimental part, we test the performance of the proposed FP-FLSH algorithm on the approximate neighbor search task for WMSN data. We randomly select one percent of the data points as query points, and the points located within the nearest two percent of each query (as measured by Euclidean distance) are considered the true neighbors of that query. All data points in the database are sorted according to their Euclidean distance to the query. We repeat each experiment 20 times and average the search results to represent the accuracy of each set of experiments.
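This evaluation protocol can be sketched as follows, assuming ground truth is defined as the nearest two percent of points by Euclidean distance to each query; `retrieve_fn` is a placeholder for the search algorithm under test.

```python
import numpy as np

def search_accuracy(data, query_ids, retrieve_fn, truth_frac=0.02):
    hits, total = 0, 0
    for qi in query_ids:
        dists = np.linalg.norm(data - data[qi], axis=1)
        n_true = max(1, int(truth_frac * len(data)))
        truth = set(np.argsort(dists)[:n_true])    # nearest 2% = relevant set
        returned = retrieve_fn(data[qi], n_true)   # algorithm under test
        hits += len(truth.intersection(returned))
        total += n_true
    return hits / total

# Usage: average over 20 repetitions, as in the experimental setup.
# accs = [search_accuracy(data, sample_queries(), my_retrieve) for _ in range(20)]
# print(np.mean(accs))
```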
Next, we will perform three rounds of experiments on the algorithm according to the above criteria.
Experiment 1: Project the data to a higher dimension using the FJLT matrix. Since the projected dimension directly affects the accuracy of the neighborhood search, different values of α are selected, and the optimal value of α is determined. Experiment 2: The data are projected into the high-dimensional space using the FJLT mapping and the sparse binary mapping, respectively. The neighbor search is performed directly, and the distance preservation of the FJLT mapping and the sparse binary mapping is compared.
B. EXPERIMENTAL DATA SETS
To demonstrate the retrieval performance of the algorithm under different data distributions, we use WMSN data sets from three different fields for the comparison experiments. These three data sets are commonly used evaluation data sets in the WMSN field.
1) SIFT: image data containing 10,000 SIFT features, each represented by a 128-dimensional vector.
2) MNIST: handwritten digit recognition data containing 10,000 MNIST features, each represented by a 784-dimensional vector.
3) GLOVE: word data containing 10,000 GLOVE features, each represented by a 100-dimensional vector.
Experiment 1:
This experiment is designed to investigate the magnified dimension m = αd after the data are projected through the FJLT matrix and determine the optimal value of α. During the experiment, α is set to 1, 2, 4, 6, and 10 for the three different data sets. In addition, using the PCA method, the reserved hash code lengths are 16-bit, 32-bit, and 64-bit. In this way, the optimal value of α is determined. That is, the optimal precision of the neighbor search is determined.
The experimental results are shown in Fig. 3. The data are projected to a higher dimensional space by the FJLT matrix. The larger α is, the higher the accuracy of the neighbor search. However, when α is greater than 6, meaning the data are projected to more than 6 times their original dimension, the accuracy of the neighbor search does not significantly increase. Moreover, the larger the dimension of the data expansion, the more the time and space complexity of the algorithm increase. Therefore, α = 6 is the best value: we use the FJLT matrix to project the initial data dimension to 6 times the original dimension, which achieves the best results.

[Figure caption: (a) Comparing the coverage of the two projection matrices using the SIFT data; (b) comparing the coverage of the two projection matrices using the GLOVE data; (c) comparing the two projection matrices using the MNIST data.]

Experiment 2: This experiment is designed to compare the distance preservation of the two mappings when expanding the data. During the experiment, the FJLT mapping and the sparse binary mapping are used to expand the dimension of the data to 1, 2, 4, 8, and 16 times its own dimension; we then directly perform the neighbor query and compare the distance preservation of the two mapping extension dimensions.
The experimental results are shown in Fig. 4. On the three different data sets, the FJLT mapping is better than the sparse binary mapping in the expansion of data features. Therefore, using the FJLT mapping to project data into a higher dimensional metric space is more suitable for the fruit fly olfactory neural network simulation algorithm. Compared with the traditional sparse mapping, the FJLT mapping has better coverage, which can better preserve the accuracy of the data.
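A small sketch of such a comparison is given below: both mappings expand the data, and the mean relative distortion of pairwise distances (after a common rescaling) is measured. A Gaussian projection again stands in for the full FJLT pipeline, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 128))
m = 4 * X.shape[1]                                     # 4x dimension expansion

G = rng.standard_normal((m, X.shape[1])) / np.sqrt(m)  # smooth (FJLT stand-in)
B = (rng.random((m, X.shape[1])) < 0.1).astype(float)  # sparse binary mapping

def mean_distortion(X, Z):
    # Sample random pairs and compare expanded vs. original distances.
    i = rng.integers(0, len(X), 500)
    j = rng.integers(0, len(X), 500)
    d0 = np.linalg.norm(X[i] - X[j], axis=1)
    d1 = np.linalg.norm(Z[i] - Z[j], axis=1)
    keep = d0 > 0
    scale = np.median(d1[keep] / d0[keep])             # best common rescaling
    return np.mean(np.abs(d1[keep] / scale - d0[keep]) / d0[keep])

print("smooth projection:", mean_distortion(X, X @ G.T))
print("sparse binary:   ", mean_distortion(X, X @ B.T))
```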
Experiment 3 (Comparative Experiment):
The FP-FLSH algorithm is compared with the following algorithms. The experimental results are shown in Fig. 5.
1) LSH [20]: a hashing algorithm based on random projection; the projection vectors map similar data to the same hash bucket in Hamming space.
2) DSH [35]: unlike existing random-projection-based hashing methods, density hashing attempts to use the geometry of the data to guide the selection of the projections (hash tables).
3) SPH [28]: the hash bits are determined by projecting the input data onto a hypersphere instead of a hyperplane to maintain the spatial consistency of the original data points.
4) ITQ [36]: reduces the mapping error of the index structure by rotating the dimension-reduced data.
5) FLSH [15]: the fruit fly olfactory system is utilized in the projection strategy and selection, combining the fruit fly olfactory system with LSH.
6) RS-FLSH [37]: a sample-based fruit fly locality-sensitive hashing model that simulates the randomness of synapse formation between neurons.
7) BCH-LSH [38]: the source data are mapped to the hash space by using the distance properties of BCH code design.
From the experimental results, we can see that the retrieval accuracy of FP-FLSH is better than that of the other LSH algorithms. Fig. 5 shows the performance of the algorithms in the comparative experiments on the three data sets as a function of the code length. On all three data sets, even with a shorter coding length, the search performance of FP-FLSH is better than that of the mainstream hashing algorithms, which means that our proposed algorithm is suitable for scenarios with low memory usage. On the MNIST data set, the search performance of the FP-FLSH algorithm is better than that of the mainstream hashing algorithms, indicating that the FP-FLSH algorithm is also applicable to higher-dimensional data sets. This also shows that the FP-FLSH algorithm proposed in this paper is a good combination of locality-sensitive hashing and the fruit fly olfactory nervous system.
Through the comparison experiments on the three data sets, we can see that FP-FLSH has better robustness to data with different scales, different dimensions and different distributions. The FP-FLSH method can optimize the data index structure, which can improve the search performance of the algorithm.
In this section, we have carried out experiments on the proposed FP-FLSH method. The experimental results show that the FP-FLSH method is suitable for WMSN data search.
VI. CONCLUSION AND FUTURE WORK
With the rapid development of 5G technology, the WMSN has broad application prospects in many fields. Due to the large scale and high dimension of WMSN data, an index structure with good performance must be established for practical applications. In this paper, a novel FP-FLSH method for WMSN data is proposed. The method utilizes the fruit fly olfactory neural framework and uses the low-distortion random projection FJLT method to project the data into a higher-dimensional metric space. The PCA method is then used to preserve the most informative features. Experiments on three real-world data sets show that the FP-FLSH algorithm proposed in this paper has clear advantages compared with the latest LSH algorithms.
In future work, we will devote ourselves to studying the low distortion projection matrix and finding a more suitable projection matrix based on the fruit fly olfactory system to further improve query accuracy. In addition, the secondary extraction strategy for data object features after projection will also be a focus of future research. We will further enhance the performance of WMSN data search in conjunction with the work of this group.
"Computer Science",
"Engineering"
] |
A Novel Satellite Mission Concept for Upper Air Water Vapour, Aerosol and Cloud Observations Using Integrated Path Differential Absorption LiDAR Limb Sounding
We propose a new satellite mission to deliver high quality measurements of upper air water vapour. The concept centres around a LiDAR in limb sounding by occultation geometry, designed to operate as a very long path system for differential absorption measurements. We present a preliminary performance analysis with a system sized to send 75 mJ pulses at 25 Hz at four wavelengths close to 935 nm, to up to 5 microsatellites in a counter-rotating orbit, carrying retroreflectors characterized by a reflected beam divergence of roughly twice the emitted laser beam divergence of 15 µrad. This provides water vapour profiles with a vertical sampling of 110 m; preliminary calculations suggest that the system could detect concentrations of less than 5 ppm. A secondary payload of a fairly conventional medium resolution multispectral radiometer allows wide-swath cloud and aerosol imaging. The total weight and power of the system are estimated at 3 tons and 2,700 W respectively. This novel concept presents significant challenges, including the performance of the lasers in space, the tracking between the main spacecraft and the retroreflectors, the refractive effects of turbulence, and the design of the telescopes to achieve a high signal-to-noise ratio for the high precision measurements. The mission concept was conceived at the Alpbach Summer School 2010.
Introduction
The topic for the Alpbach Summer School 2010 in Austria was "New Space Missions for Understanding Climate Change". Early career scientists and engineers from many countries formed working groups to devise new space missions to tackle this challenging subject. Following the summer school, one mission concept was chosen for further development at a subsequent workshop in Obergurgl, the outcome of which is described in this paper. At the core of the mission chosen for further study was a novel active limb-sounding instrument, used as part of a multi-instrument measurement approach to observing three key climate change variables: water vapour, clouds and aerosols.
Water vapour in the upper troposphere-lower stratosphere (UTLS) region has an important role in determining the atmospheric temperature profile and tropopause structure [1,2]. Observations are particularly challenging because of the low concentrations, and the poor vertical resolution of conventional passive nadir instruments in this region. However, this information is vital in general circulation models (GCMs) for the understanding and future prediction of the Earth's climate system. Aerosols and clouds, and their interactions, have also been identified as key climate change uncertainties, and observations of these in the UTLS are also limited [3].
Currently, several ground-based systems use differential optical absorption spectroscopy (DOAS) to detect low concentrations of trace gases by comparing path-integrated measurements over a continuous range of wavelengths. We propose an analogous spaceborne system, looking through the limb of the atmosphere at a few discrete wavelengths to detect differential absorption due to water vapour. By combining such a limb-looking LiDAR with a nadir-looking range-resolved LiDAR, this mission would provide water vapour and aerosol measurements in the UTLS and above of unprecedented accuracy and resolution. The addition of an imaging spectrometer operating in the visible and infrared range allows cloud microphysical properties to be retrieved in the same location, enabling process studies of cloud-aerosol-climate interactions and the reduction of key uncertainties in climate prediction.
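To make the differential absorption principle concrete, the following is a minimal worked example (not from the paper): assuming Beer-Lambert attenuation at one strongly and one weakly absorbed wavelength over a two-way path via a retroreflector, the mean water vapour number density follows from the ratio of the received signals. All numbers below are invented round values for illustration only.

```python
import numpy as np

sigma_on = 5e-24     # absorption cross-section at the on-line wavelength, m^2
sigma_off = 1e-25    # cross-section at the off-line wavelength, m^2
p_on, p_off = 0.62, 0.95  # received signals, normalized to the emitted power

# N = ln(P_off / P_on) / (2 * (sigma_on - sigma_off) * L) for a two-way path.
L = 300e3            # one-way absorbing path length through the limb, m
N = np.log(p_off / p_on) / (2.0 * (sigma_on - sigma_off) * L)
print(f"mean number density along the path: {N:.3e} molecules/m^3")
```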
A nadir water vapour LiDAR was proposed to ESA in 2002 as the WALES candidate Earth Explorer mission [4]. The instrument concept presented in this paper builds on the laser and nadir LiDAR system development begun for the WALES mission.
Section 2 presents the scientific motivation and requirements for the new mission. Section 3 gives an overview of the mission concept, while Section 4 presents the preliminary performance calculations for the system. Section 5 presents the technical implementation in more detail and Section 6 outlines the main technical challenges associated with the concept.
The Importance of Upper Air Water Vapour, Clouds and Aerosols
Water vapour is the most important greenhouse gas, contributing about 50% towards the Earth's greenhouse effect, followed by clouds which contribute around 25% [5].The radiative balance of the atmosphere is particularly sensitive to changes in the UTLS, as this is where most of the Earth's thermal radiation escapes into space, and where cirrus clouds trap this outgoing radiation.For example, decreasing water vapour in the stratosphere has been identified as slowing the rate of increase of global surface temperature between 2000 and 2009 by about 25% [6], however it is not clear if the observed decadal variability in stratospheric water vapour is a driver or rather a response to climate change.
Water vapour in the stratosphere originates from transport through the tropopause region [7] and from the oxidation of methane, especially in the higher stratosphere.Transport is mainly induced by deep convection in the tropics passing the cold trap tropopause ( [6] and references therein).Absolute amounts are low, less than 10 ppmv [8], but nevertheless are of crucial importance, both radiatively and for atmospheric chemistry, for example through an indirect destructive effect on the ozone layer as shown in [9].Furthermore water vapour can be used as a tracer to study atmospheric dynamics such as stratosphere-troposphere exchange processes or the injection of trace gases into the stratosphere [10].
Important processes of this kind are for example the tropopause folds at midlatitudes or tropopause inversion layers [11], which are associated with strong peaks in the atmospheric stability above the tropopause.Their formation mechanisms are not well understood but the radiative effects of water vapour in combination with ozone may have a substantial impact [12].
The UTLS is also of interest because it is the region where aircraft fly, emitting carbon dioxide, water vapour, nitrous oxide and aerosols, forming contrails and having an impact on cirrus clouds and the radiative budget.Aerosol-cloud interactions are key here, and also poorly understood: the fourth report of the Intergovernmental Panel on Climate Change attributes the largest uncertainties and lowest scientific understanding of radiative forcing to aerosols and their direct and indirect albedo effects [13].
Aerosols in the UTLS affect the Earth's radiation budget by reflecting and absorbing incoming radiation.The scattering of shortwave radiation leads to a cooling of the climate system, and the absorption of longwave radiation to an increased heating rate [14].Aerosols also can affect the climate in an indirect way by modifying cloud properties, for example droplet size, quantity of cloud drops, cloud albedo, liquid water content and cloud lifetime.As a third effect, aerosols may have an influence on atmospheric chemistry through their indirect modification of the concentration of several gases, especially greenhouse gases [15].The stratospheric concentrations of aerosol particles are in general quite small compared to tropospheric aerosol, apart from the high concentrations found after volcanic eruptions.The aerosol optical thickness in the stratosphere is typically an order of magnitude smaller than in the troposphere [16].
Cirrus clouds are ice clouds found in the upper troposphere, and cover about 30% of the globe on average over a year [17].Cirrus clouds affect the Earth's radiation balance by reducing the amount of shortwave radiation reaching the surface, and reducing the longwave radiation emitted back to space.The net effect on the surface radiation budget is dependent on the microphysical properties of the clouds such as ice water content, number density and size of ice crystals as well as thickness.Theoretical calculations suggest that optically thin cirrus (e.g., a contrail) generally has a warming effect whereas optically thick cirrus has a cooling effect.The net effect is still uncertain although it is generally believed that cirrus has a net warming effect [18].
Other high altitude clouds are also likely to have a net warming effect.A colder stratosphere in combination with a higher water vapour content will increase the probability of the formation of ice clouds in the lower stratosphere: polar stratospheric clouds.These clouds play a major role in the heterogeneous ozone chemistry and the decrease in ozone in polar winter, and require continued observation and better characterization for model validation [19,20].Polar mesospheric clouds (noctilucent clouds) are thin layers of nanometer-sized ice particles that occur at even higher altitudes: between 82 and 87 km in the high-latitude summer mesosphere [21].Their formation is very sensitive to the mesopheric environment, and changes in their frequency of occurrence, brightness and altitude could be related to climate change [22], so continued observation of these phenomena is necessary.
In contrast to the tropospheric water vapour increase, which is well simulated in global climate models, modelled past and projected future stratospheric water vapour variations significantly disagree between different GCMs [23,24]. Bias correction is also a challenge for the assimilation of observations of humidity [25]; accurate observations of stratospheric water vapour at high vertical resolution would allow more stringent testing of the models, and better exploitation of existing satellite and radiosonde data in data assimilation systems. All these uncertainties must be understood in order to make better observations and predictions of climate change.
Current Capabilities for Measuring Water Vapour, Clouds and Aerosols in the UTLS and above
The longest stratospheric measurement record comes from the radiosonde network. Sondes provide high-resolution vertical profiles but are sparsely distributed across the globe, and both sonde type and reporting practices differ at different sites. Wang et al. [26] compared the performance of two types of operational radiosonde with a more accurate dew-point hygrometer and found that the radiosondes were insensitive to humidity changes in the upper troposphere, leading to potential errors in the climate record derived from these measurements. Sun et al. [27] also detect a dry bias which increases with altitude, and note the difficulty of collocating sonde profiles with other data sources, in part because of the drift of the balloon as it ascends.
Satellites are now also used to measure atmospheric humidity, providing much better spatial and temporal coverage than sondes. Infrared sounders such as SEVIRI on Meteosat Second Generation and the High-resolution Infrared Radiation Sounder (HIRS) provide information on upper-tropospheric humidity, but these nadir sounders have broad weighting functions that limit their vertical resolution. Deriving profile information from a top-of-atmosphere brightness temperature requires good a priori information on the atmospheric temperature structure, and the largest differences between different observations, models and reanalyses are seen in the upper troposphere and stratosphere [3].
Limb sounders have narrower weighting functions, with a pronounced peak at the altitude corresponding to the tangent point. This improves vertical resolution whilst still offering good coverage, at the expense of complex radiative transfer and difficult retrievals in the lower atmospheric layers due to increased scattering. Several satellites currently make passive limb measurements of water vapour, usually as part of missions measuring a wide range of atmospheric species. The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) on Envisat, launched in 2002, is a mid-infrared Fourier transform spectrometer which can measure water vapour in the mesosphere, lower thermosphere, stratosphere and upper troposphere, at a vertical resolution of 3 km. The Odin satellite, launched in 2001, has a combined payload of the Sub-Millimeter Radiometer (SMR) and the Optical Spectrograph and InfraRed Imaging System (OSIRIS), and can also be used to derive water vapour at 3 km vertical resolution, although 50% of its operational time is spent making astronomical rather than Earth observations. Higher vertical resolution is achieved by the Microwave Limb Sounder (MLS), launched in 2004 on Aura. It also observes thermal emission in the limb, and has a vertical resolution of 1.5 km at 200 hPa, reducing to 5 km in the mesosphere. The Tropospheric Emission Spectrometer (TES), also on Aura, has both nadir- and limb-viewing modes for water vapour. TES has a high spectral resolution but a relatively small swath (5.3 × 8.5 km) and nadir footprint (0.53 × 5.3 km). In limb mode, TES observes a region with a ground footprint of 26 km × 41.8 km, with a height resolution of 2.3 km and vertical coverage from 0 to 33 km.
Various occultation techniques are also used to measure water vapour. Water vapour profiles have recently been retrieved from SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CartograpHY) solar occultation data for an altitude range of 15-45 km with a vertical resolution of 2.6 km. However, the spectral resolution of SCIAMACHY is not high enough to resolve water vapour lines, which leads to additional correction steps in the retrieval algorithm [28]. The Atmospheric Chemistry Experiment, onboard the Canadian satellite SCISAT-1, carries a Fourier transform spectrometer for solar occultation profiles of water vapour. Vertical resolution is 3 to 4 km and the instrument has been making measurements since 2003. Nassar et al. [29] suggest that retrieved water vapour profiles have low measurement error, and are subject to a random error of less than 2.0% for the stratosphere (with higher values in the troposphere and mesosphere). The Global Ozone Monitoring by Occultation of Stars (GOMOS) instrument uses stellar occultation to retrieve vertical profiles of atmospheric constituents, with a vertical resolution of 2-4 km. The signal-to-noise ratio for water vapour is low, but the retrieval has recently been improved through refined calibration procedures and is expected to improve further with the new processor version [30].
The largest impact on numerical weather prediction (NWP) in recent years has come from radio occultation measurements, such as those from the Constellation Observing System for Meteorology, Ionosphere and Climate (COSMIC) and the Global Navigation Satellite Systems Receiver for Atmospheric Sounding (GRAS). By assimilating the measured bending angle of atmospheric refraction, the atmospheric temperature, pressure and water vapour content fields can be constrained in NWP models. Sun et al. [27] compare atmospheric profiles from COSMIC and radiosondes, finding dry biases in the latter depending on radiosonde type and time of day. This demonstrates the importance of radio occultation measurements as a reference for other humidity measurements; however, they are only accurate to 0.1 g/kg, and the very low amounts of water vapour in the UTLS require a more sensitive technique.
The majority of spaceborne systems for cloud and aerosol measurements also employ passive instruments. NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) has been spaceborne since 1999, acquiring data in 36 spectral bands. Aerosol optical depth is derived globally, and size distribution can be derived over the oceans, at a spatial resolution of up to 250 m. MODIS wavelengths also detect clouds, and the discrimination of cloud from aerosol plumes is challenging. The Multi-angle Imaging Spectroradiometer (MISR; 1999-present) takes images at nine viewing angles simultaneously at visible and near-infrared frequencies, which allows the discrimination of surface and aerosol signatures. However, it has a narrow swath and a long repeat cycle of 16 days. The Stratospheric Aerosol and Gas Experiment III (SAGE III) employed a UV/visible spectrometer to make measurements of aerosol (up to 40 km) and water vapour (up to 50 km), along with cloud detection (6-30 km), at 1 km vertical resolution. It was launched in 2002 but ceased operating in 2006.
Active monitoring of clouds and aerosol became possible with the launch in 2006 of the cloud-profiling radar and LiDAR on CloudSat and CALIPSO (Cloud-Aerosol LiDAR and Infrared Pathfinder Satellite Observations), respectively. The two spacecraft fly in close formation so that the observations from the two instruments are near-simultaneous, and together the instruments provide much greater information on the vertical structure of clouds. However, they only measure a narrow swath. The Earth Clouds, Aerosol and Radiation Explorer (EarthCARE), currently in development, will combine a cloud-profiling radar and a backscatter LiDAR on one spacecraft, together with a multi-spectral imager and a broadband radiometer. Exploiting these measurements simultaneously will allow more sophisticated process studies of aerosol-cloud-radiation interactions.
Finally, Process Exploration Through Measurements of Infrared and Millimeter Wave Emitted Radiation (PREMIER) is a candidate for ESA's seventh Earth Explorer mission. If selected, this mission would employ passive limb sounding to determine concentrations and dynamics of many atmospheric constituents, including water vapour. The altitude range for water vapour measurements is 6-55 km, with a target accuracy of 5% and a vertical resolution of 2 km. Whichever of the three candidate missions is selected, the earliest launch would be 2016.
Observational Requirements for a New Mission Concept
The main purpose of the mission concept presented in this paper is to provide new observational data on upper-air quantities for climate studies, focussing on water vapour. The Database of Observational Requirements formulated by the World Meteorological Organisation (WMO) [31] gives different threshold and target values for the vertical resolution needed in water vapour measurements in the different atmospheric layers, depending on the application. For Nowcasting and Very Short Range Forecasting, the threshold vertical resolution in the higher troposphere is 3 km and the target resolution is 1 km. For Global Climate Observing System (GCOS) applications, however, the threshold and target vertical resolutions are considerably more demanding, at 2.0 and 0.1 km respectively. Such high vertical resolution enables the detection and proper sampling of small-scale tropopause folds, which may have vertical extents of only 1 km to 4 km [32], as well as resolving the strong gradients in water vapour around (and defining) the tropopause. The data would also be of comparable resolution to high-resolution GCMs in this atmospheric region [8].
Horizontal resolution requirements can be lower, since stratospheric water vapour features vary considerably more slowly on horizontal spatial scales than on the vertical scale. Horizontal resolution requirements of about 150-200 km for the mid-troposphere and the UTLS regions, respectively, were discussed in the WALES proposal [4] as delivering the best input to atmospheric models. Finer-scale features such as tropopause folds exist in the UTLS region, with extensions of about 100 km [32]. A system aiming at the horizontal detection of such folds requires a rather finer target resolution, of the order of 10-50 km.
To be able to resolve the atmosphere from the UTLS region upwards, the target lowest measurement height is 8 km. To detect polar mesospheric clouds (noctilucent clouds), we specify an upper altitude threshold of 100 km.
Water vapour concentrations in the atmosphere are highly variable. To cover the atmosphere from the UTLS down to the surface, it has been suggested that a dynamic range of 0.01-15 g/kg is necessary [4]. To also cover water vapour in the stratosphere, considerably higher sensitivity is required, since concentrations there are generally lower, at around a few ppmv (i.e., roughly 0.001 g/kg) or less.
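For orientation, volume mixing ratios (ppmv) and the mass mixing ratios (g/kg) of the dynamic range quoted above are related by a simple molar-mass scaling; the following minimal sketch (the function name and example values are our own illustration) makes the correspondence explicit:

# Convert a water vapour volume mixing ratio [ppmv] to a mass mixing
# ratio [g/kg]; molar masses of water and dry air in g/mol.
M_H2O = 18.015
M_AIR = 28.964

def ppmv_to_g_per_kg(vmr_ppmv):
    """Mass mixing ratio [g/kg] from volume mixing ratio [ppmv]."""
    return vmr_ppmv * 1e-6 * (M_H2O / M_AIR) * 1e3

for vmr in (1.0, 5.0, 10.0):  # representative stratospheric values
    print(f"{vmr:5.1f} ppmv = {ppmv_to_g_per_kg(vmr):.4f} g/kg")

A typical stratospheric value of 5 ppmv corresponds to roughly 0.003 g/kg, an order of magnitude below the lower end of the tropospheric range quoted above.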
To properly characterise the variability of upper-tropospheric and stratospheric water vapour, it is desirable to sample both sub-seasonally and sub-diurnally, over a number of years. Global coverage is also required: good coverage over the poles would allow the monitoring of polar stratospheric and mesospheric clouds, the tropics require coverage for understanding the role of deep convection in stratospheric variability, and tropopause folds occur at midlatitudes. A mission lifetime of at least 4 years is recommended as providing the minimum number of repeat observations for attempting a climatology of UTLS water vapour. Such water vapour observations alone would be beneficial to our scientific understanding of the UTLS and above, but it would be even more useful to monitor clouds and aerosols from the same observational platform. Passive imaging radiometers such as MODIS provide this capability; a resolution of 200 m × 200 m and a swath of 400 km are suggested as providing suitable complementary information to the water vapour observations, and continuity with existing similar instruments.
A New Measurement Technique in Space: Active Limb Sounding
The observational requirements for upper-air water vapour, as described above, are challenging to meet. The measurement technique needs to combine very high vertical resolution with a high signal-to-noise ratio for the detection of very low water vapour concentrations. The technique we propose here is an active limb sounding system, based on Integrated Path Differential Absorption (IPDA) LiDAR.
Limb sounding for water vapour is an attractive option because the measurements at high altitudes are not contaminated by the signal from the humid, optically thick troposphere. Current limb sounders (as described above) have resolutions on the order of kilometres in the upper troposphere and above, which is not high enough to resolve the strong vertical gradients in water vapour around the tropopause. Solar occultation techniques, despite the larger radiative flux densities available to analyze, have limited measurement opportunities per orbit. By moving to an active technique, not only can this resolution be improved but the threshold of detection should be lowered, leading to greater sensitivity and a better characterization of the low concentrations of water vapour in the UTLS and above.
The concept of hard-target IPDA LiDAR has been around for several decades. Recent studies into spaceborne instruments for trace greenhouse gas detection propose solid ground and cloud surfaces as reflectors for a nadir-emitted pulse [33]. Along with the difficulty of estimating the differential reflectivity over heterogeneous surface types, the signal-to-noise ratio (SNR) of such an instrument is severely affected by overall low ratios of backscattered radiation and by speckle noise from interference originating at an uneven ground surface. The technique itself is also ill-suited to exploiting the ranging or profiling advantages of a standard LiDAR system, especially above the middle troposphere.
Differential Optical Absorption Spectroscopy (DOAS) is one of the most extensively applied methods for measuring trace species in the open atmosphere [34], producing total path measurements of chemical components from simultaneous observations of spectrally resolved electromagnetic radiation. The technique for the retrieval of multiple species' path-integrated number densities has been pioneered, amongst others, by Platt et al. [35] in the form of long-path DOAS measurements with an active continuous light source, though common passive configurations use natural light as the source, either from direct solar or stellar radiation or from diffuse sunlight. Active long-path systems have been proposed in both monostatic and bistatic designs. Monostatic systems use a collocated transmitter and receiver and rely on arrays of retroreflectors from which light is picked up by a telescope, while a bistatic system has a separate transmitter and receiver. Optimal design considerations are discussed in [36].
Since DOAS relies on continuous spectra, laser sources are not common in active set-ups. Nevertheless, an obvious advantage of a laser source is the concentration of the radiant energy within a narrow spectral band and a limited beam divergence, albeit at the expense of spectral coverage. A further advantage of targeting only a few selected wavelengths is the possibility of implementing significantly higher-gain detectors with high quantum efficiency and low noise levels.
For accurate measurements of water vapour, we propose a differential absorption system utilizing elastic backscatter and integrated-path LiDAR, centred on a wavelength λon at an isolated water vapour absorption feature in the near infrared, compared against a nearby off-peak wavelength λoff. Integrated absorption of λon over the limb path offers much higher sensitivity (at the expense of spatial resolution) and increases the dynamic range of vapour detection. The selection of the wavelength pairs remains a trade-off between higher accuracy and sensitivity from larger differential absorption cross sections, minimization of systematic errors through smaller wavelength differences and avoidance of absorption bands of other gases, and finally depends on the frequencies that can be generated by available laser sources [37].
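The underlying retrieval relation is compact: for a two-way path to a reflector, the logarithm of the ratio of the normalised offline and online return energies gives the differential optical depth, and dividing by twice the differential absorption cross section gives the path-integrated water vapour amount. The sketch below illustrates this standard IPDA relation; it is not the mission's retrieval code, and all numerical values are placeholders.

import math

def ipda_column(E_on_rx, E_off_rx, E_on_tx, E_off_tx, dsigma_m2):
    """Path-integrated number density [molecules/m^2] over a two-way path."""
    # Normalise each return by its transmitted pulse energy; scattering
    # and system terms cancel to first order in the on/off ratio.
    dtau_two_way = math.log((E_off_rx / E_off_tx) / (E_on_rx / E_on_tx))
    return dtau_two_way / (2.0 * dsigma_m2)

# Illustrative only: a 10% differential attenuation and an assumed
# differential cross section of 1e-26 m^2.
N = ipda_column(E_on_rx=0.9, E_off_rx=1.0, E_on_tx=1.0, E_off_tx=1.0,
                dsigma_m2=1e-26)
print(f"path-integrated column: {N:.2e} molecules/m^2")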
Extinction measurements from such a system would also allow the detection of very low concentrations of stratospheric aerosol, which are difficult to obtain from a conventional nadir backscatter LiDAR measurement alone and are therefore beyond the capability of the CALIPSO mission [38]. Rayleigh and Mie scattering maintain the polarization ratio within a 2%-3% limit, whilst non-spherical particles or multiple scattering may induce some degree of depolarization [39]. This ratio could be used to determine particle shape, phase or multiple scattering, with particularly high values in the case of cirrus clouds with small optical depths, due to particles such as crystallites or hexagonal ice crystals [40], although it may only be directly relevant for the nadir backscatter signal.
A similar novel active limb-sounding technique has been proposed to ESA [41] as part of the ACCURATE mission (Atmospheric Climate and Chemistry in the UTLS Region and climate Trends Explorer). That mission aims to retrieve atmospheric constituents and line-of-sight winds from occultation measurements between two low Earth orbit (LEO) platforms using active microwave and IR signals. In considering the original ACCURATE proposal, the ESA committee noted the mission's potential for high vertical accuracy, global coverage and good absolute calibration. The Alpbach summer school team arrived at the active limb sounding concept independently, but it is these characteristics that we also wish to exploit, with a focus on water vapour in the UTLS and above.
Instruments
We propose a novel approach which combines in a single instrument a water vapour Differential Absorption LiDAR (DIAL) in nadir-viewing mode [4,42] with a monostatic IPDA LiDAR in limb sounding by occultation geometry ("IPDALLS"). Looking through the limb of the atmosphere, the IPDALLS system will sample the atmosphere at high vertical resolution with a long integration path. Several wavelength pairs with varying online absorption cross sections will be used to increase the observations' dynamic range within the UTLS and above. Measurements in this region were beyond the scope of the WALES candidate explorer mission [4], hence this new concept would lead to a more complete and accurate characterization of the water vapour profile. A secondary payload of a medium-spatial-resolution multispectral radiometric imager would allow wide-swath cloud and aerosol imaging.
The IPDALLS system could be implemented with a bistatic instrument, where the transmitter and receiver are on separate platforms. This is the configuration proposed for the ACCURATE mission.
However, since two satellites in prograde and retrograde orbits within the same orbital plane could generate at most four occultations per revolution, the basis of our mission concept is instead a monostatic transmitter-receiver spacecraft flown in formation with multiple spaceborne retroreflectors. An illustration of the mission concept is shown in Figure 1.
Figure 1. Illustration of the mission concept. The primary spacecraft is flown in an orbit which is counter-rotating with respect to a constellation of retroreflectors. The primary spacecraft carries a water vapour Differential Absorption LiDAR in nadir-viewing mode, and a monostatic IPDA LiDAR for limb sounding by occultation geometry (IPDALLS), using the retroreflectors as hard targets. The primary spacecraft also carries an imaging radiometer optimised for wider-swath cloud and aerosol imaging.
The nadir LiDAR will provide range-resolved vertical profiles of water vapour and aerosols, as proposed for the WALES mission, albeit at slightly relaxed requirements, as the signal will not have to penetrate into the optically thick lower troposphere. The same (or a redundant) transmitting unit and the same receiving unit are used for the limb IPDA LiDAR (see Sections 3.3 and 5.1 for the proposed operation and instrument design, respectively). Although the IPDALLS system will in principle provide range-gated water vapour and aerosol data in addition to the integrated values, in practice we estimate that any molecular backscatter signals will probably remain undetectable over the large limb distances involved.
Synergistic data processing (nadir-limb matching) will allow the simultaneous exploitation of the high along-track resolution of the range-resolved nadir signal and the enhanced sensitivity of a path-integrated measurement from the IPDALLS system. The latter will also improve the relatively poor vertical resolution that could be achieved with a nadir DIAL signal alone when operating in upper atmospheric regions characterized by low backscatter coefficients. The proposed resolution and sensitivity improvement through the use of multiple observation methods is conceptually similar to the panchromatic sharpening of multispectral imagery.
By design, the multiple-wavelength bidirectional LiDAR instrument, the core element of the mission's payload, will emit pulses at wavelengths specific to water vapour absorption lines for the DIAL retrieval. According to studies summarized in [42], three distinct water vapour absorption bands are required for probing the entire atmospheric column from the boundary layer up to the lowermost stratosphere (see Section 5), in addition to a reference offline wavelength in the wing of the absorption spectrum. The four wavelengths used in the WALES proposal were λon1 = 935.6845 nm, λon2 = 935.5611 nm, λon3 = 935.9065 nm and λoff = 935.4122 nm. In this preliminary study, we find that the same number of wavelengths is required for long-path integrated limb sounding from the upper troposphere upwards, and we retain these four wavelengths as a starting point for the performance assessment, although a posterior optimization study may well lead to a more favourable selection. See Section 4 for more detailed discussion and calculations.
The pump laser's first (1,064 nm) and second (532 nm) harmonic wavelengths will also be used for the retrieval of atmospheric aerosol loading, via backscatter and extinction measurements from the elastic backscatter and retroreflected signals at these two widely spaced laser wavelengths. A measurement of the depolarization ratio will also be made for aerosol characterization.
Inevitably, the presence of clouds will limit the number of IPDALLS measurements, as the technique can only be applied above cloud top height, below which the optical link between the instrument and retroreflector will be lost. However, in its function as an aerosol elastic backscatter LiDAR, and due to its bidirectional viewing geometry, in these situations the system will act as a combined range-gated nadir-limb ceilometer, especially for thin high-altitude cloud decks with a sufficiently large horizontal spread. It will thus provide information on the vertical extent of high-altitude, optically thin translucent clouds, such as (contrail) cirrus, polar stratospheric and noctilucent clouds. Given a typical cirrus cloud thickness on the order of 1-2 km, achieving a high vertical resolution from the combined data is crucial.
A fairly conventional radiometric imager will be used for the retrieval of cloud macro- and microphysical properties at cloud-resolving scale, following the operational method of Rosenfeld and Lensky [43]. Clouds are detected using the brightness temperature difference between visible and thermal infrared channels; cirrus ice clouds and water clouds can be distinguished by the brightness temperature difference between particular thermal infrared channels, which can also be used for the retrieval of cloud top temperatures (and heights).
The radiative impact of cirrus clouds depends on certain physical cloud parameters: effective particle radius (Re), ice water path (IWP), optical depth (τ), and the vertical position of the cloud, manifested in its cloud top temperature and cloud top height. The radiometer spectral bands have been chosen to be sensitive to these parameters. The visible (VIS) and short-wave infrared (SWIR) bands (0.66 µm and 1.66 µm) are highly sensitive to Re and IWP, and thus to τ. The 10.8 µm and 12.0 µm bands are also sensitive to the same parameters, but need to be combined with additional bands for accurate results: the VIS/SWIR bands during daytime and the 3.9 µm mid-infrared (MIR) band during night. The thermal infrared (TIR) bands are sensitive to cloud top temperature and height [44]. The five bands specified above are the minimum requirements to recover the desired cloud parameters. Additionally, two SWIR bands, 1.38 µm and 1.24 µm, are incorporated into the radiometer design. The 1.38 µm band enables the detection of upper-level cirrus clouds, especially over land [45]. Furthermore, the ratio of the 1.38 µm band to the 1.24 µm band is very effective in discriminating upper-level cirrus clouds from lower-level aerosols and dust [46]. The horizontal distribution of column-integrated water vapour will be retrieved from a further, dedicated band at 6.3 µm, and will be useful for the contextual interpretation of the nadir LiDAR data and their cross-validation.
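As a hedged illustration of the band logic just described, the sketch below flags cirrus where the 1.38 µm/1.24 µm reflectance ratio and the 10.8-12.0 µm split-window brightness temperature difference both exceed thresholds; the threshold values are illustrative placeholders, not values from the cited retrievals.

import numpy as np

def flag_cirrus(r138, r124, bt108_k, bt120_k,
                ratio_thresh=0.3, btd_thresh=1.5):
    """Boolean cirrus mask from co-registered radiometer bands."""
    ratio_test = (r138 / np.maximum(r124, 1e-6)) > ratio_thresh  # high cloud
    btd_test = (bt108_k - bt120_k) > btd_thresh                  # thin ice cloud
    return ratio_test & btd_test

# One illustrative pixel: bright at 1.38 um with a positive split-window BTD.
print(flag_cirrus(r138=0.12, r124=0.25, bt108_k=232.0, bt120_k=229.8))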
Orbit Specification
The appropriate orbit for this mission concept is a circular low Earth orbit (LEO). Crucial to the implementation is a phased homogeneous constellation of retroreflectors on board multiple microsatellites, counter-rotating with respect to the monostatic active spacecraft, and in the same orbital plane. A single plane is required for the occultation geometry, to achieve collocated nadir and limb measurements and for relaxed requirements on the pointing and tracking system described in Section 5.4.
Relative precession between the two orbits must be avoided; for a mission which requires global coverage, this can only be achieved with satellites in a polar orbit. Furthermore, the altitudes of the primary and retroreflector spacecraft should be close, to ensure that the descent of the tangent point through the atmosphere is as close to vertical as possible, though a minimum altitude separation of 20 km is recommended to minimize collision risks [47]. We suggest that the constellation of retroreflector spacecraft is injected into the lower orbit to avoid collision with the primary satellite in case of orbit maintenance problems for the microsatellites.
Five retroreflector spacecraft in a 550 km LEO (270° inclination, 5,738.8 s period) in combination with a 582 km LEO (90° inclination, 5,778.6 s period) for the primary satellite have been identified as providing an optimal configuration. This involved a trade-off study between the number of occultations required, the measurement sequence timing, the amount of atmospheric drag and its implications for orbit maintenance, as well as considerations of the nadir LiDAR SNR, which decreases with increasing path length (altitude). The altitude has been set to produce a 10-day return period of the ground track. Active orbit control will guarantee high-accuracy inclination maintenance and prevent relative drift of the orbital planes (see Section 5.5 for more details).
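The quoted periods can be checked against Kepler's third law, T = 2π√(a³/µ); the short calculation below assumes an equatorial Earth radius of 6,378 km and reproduces both values to within a fraction of a second.

import math

MU = 398600.4418   # Earth's gravitational parameter [km^3/s^2]
R_E = 6378.137     # equatorial Earth radius [km]

def period_s(altitude_km):
    """Period [s] of a circular orbit at the given altitude."""
    a = R_E + altitude_km
    return 2.0 * math.pi * math.sqrt(a**3 / MU)

print(f"550 km orbit: {period_s(550.0):7.1f} s")   # ~5,739 s, cf. 5,738.8 s
print(f"582 km orbit: {period_s(582.0):7.1f} s")   # ~5,779 s, cf. 5,778.6 s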
Measurement Sequence and Coverage
A typical measurement sequence for the nadir LiDAR and limb sounding observations lasts for a total of 540.6 seconds; this is depicted in Figure 2. The sequence comprises the time for nadir LiDAR observations, tracking and locking of a retroreflector, calibration of the IPDALLS system and finally the limb measurement itself. The sequence is designed so that the nadir and limb measurements are roughly collocated (within 200 km; see Figure 4 and the discussion thereof, below).
In order not to overstretch power and heat-evacuation requirements and to maximize the outgoing pulse energy, the nadir and limb instruments are powered sequentially. The sequence begins at time t0 with the nadir LiDAR operating at a nominal pulse repetition frequency (PRF) of 25 Hz for 324 s, covering a ground track of 2,244 km, the midpoint of which corresponds to the tangent point of the following limb sounding and approximately to the orbital crossing of the primary and secondary spacecraft. At time t1, a search-and-lock procedure (described in Section 5.4) is initiated to establish the optical link between the instrument and the retroreflector; this optical communication procedure can take no longer than 122 s. Once the link is established, it will be maintained by an internal tracking mechanism within the IPDALLS instrument. The limb occultation measurement starts at time t2 with 'free space' calibration measurements taken from a height of 250 km down to 100 km (t3). Scientific data is gathered for 35 s between t3 and t4, from a height of 100 km down to ground level, although in practice the link will be terminated at optically thick cloud top height (estimated between 5 and 15 km according to latitude and season), and not all data may be downlinked due to communication constraints. During the idle time of 35 s between limb sounding termination and the next collocated nadir sequence, the system will resume and dwell in nadir sounding mode, thereby maintaining continuous laser operation to increase transmitter lifetime and providing further science data.
Figure 2. The measurement sequence, shown with the orbit of the retroreflector spacecraft just inside that of the primary spacecraft (RSC; 550 km altitude, traveling anticlockwise). At time t0, the primary spacecraft begins making nadir measurements. At time t1 the nadir measurement ceases, and the primary spacecraft searches for and locks onto the retroreflector. Limb measurements begin at t2 with calibration, followed by the collection of scientific data from t3 to t4. There is then some free time before the start of the next measurement sequence as the next retroreflector comes into view (shown in grey). Ray bending due to atmospheric refraction is unaccounted for.
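The stated segment durations imply a calibration segment of about 24.6 s and a tangent-point descent rate consistent with the value of approximately 2.8 km/s used later; the arithmetic below is a simple budget check under the assumption that the calibration segment fills the remainder of the 540.6 s sequence.

T_TOTAL   = 540.6   # full sequence [s]
T_NADIR   = 324.0   # nadir LiDAR segment [s]
T_LOCK    = 122.0   # worst-case search-and-lock [s]
T_SCIENCE = 35.0    # limb science, 100 km tangent height down to ground [s]
T_IDLE    = 35.0    # idle/nadir dwell before the next sequence [s]

t_cal = T_TOTAL - (T_NADIR + T_LOCK + T_SCIENCE + T_IDLE)
print(f"calibration (250 to 100 km): {t_cal:.1f} s")                  # ~24.6 s
print(f"science-segment descent rate: {100.0 / T_SCIENCE:.2f} km/s")  # ~2.86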
Figure 3(a) shows the simulated coverage for one day of measurements with five retroreflectors, with the measurement tracks coloured according to time of day (UTC). Observations are global, with the largest orbit step or measurement gap of about 2,500 km around the equator, and the highest sampling density over the poles (although a certain proportion of these will be lost when the instrument is pointing directly at the Sun). Three weeks' sampling (Figure 3) produces almost global coverage, with the largest gaps at the equator reduced to around 500 km, while the sub-diurnal sampling is simultaneously improved. Coverage will always be limited by the narrow ground track of the LiDAR system. In the case of the loss of one retroreflector, global coverage after 21 days is not significantly affected (Figure 3(d)). The largest impact is over the tropics, although gaps are still less than 800 km, and coverage remains excellent at mid-latitudes and over the poles. The wider swath of the multispectral radiometric imager leads to a much swifter global coverage by the passive instrument. A limitation of the prescribed polar orbit is the convoluted sampling of the diurnal and seasonal, or intra-annual, water vapour cycles for any given location. In other words, a full 24-hour cycle will only be sampled over the course of a year. This is not ideal for a tracer that is highly variable both in space and in time; however, for a given latitude belt, at least two diurnal observations will be available within a narrow zonal region.
Because of the Earth's rotation between times t0 and t3, the measurements taken by the nadir- and limb-looking LiDARs will not cover exactly the same track. However, the nadir instrument points slightly cross-track or off-nadir to avoid specular reflection, which mitigates this effect. Furthermore, collocation should be defined as a function of the characteristic spatial scales of variability of the observed phenomenon. For tropopause folds and the tropical pumping of water vapour into the UTLS, this characteristic scale is of the order of 100 km. Our analysis (shown in Figure 4) produced a typical measurement track offset at the equator at ground level (the worst case) of between 33 and 200 km for an off-nadir angle of 5°.
Figure 4. Representation of the ground-track offset between limb and nadir measurements, for 0 to 20° latitude. The offset arises because of the rotation of the Earth between nadir and limb measurements. The nadir LiDAR will be inclined to minimise returns from specular reflection; this inclination can be utilised to minimise the offset (5° inclination shown).
Data Retrieval and Use
Observations from this novel system are designed to deliver higher-resolution and higher-sensitivity water vapour measurements than currently exist for the UTLS and above. The system will help to quantify and understand the differences between current water vapour measurements, and will provide collocated active and passive measurements of water vapour, clouds and aerosol for process studies. Full exploitation of the potential of this combination of measurements will require further research, but much progress has already been made in understanding and assimilating limb observations. Inversion algorithms for the nadir water vapour and aerosol elastic backscatter LiDAR will follow standard procedures adopted for DIAL measurements. The inversion of an integrated-path DIAL signal is conceptually rather straightforward, though specific considerations for an occultation geometry will complicate the picture somewhat: chromatic ray bending, refractive dilution and scintillations (see [30] and references therein, for the GOMOS stellar occultation measurements), a number of considerations related to the use of narrow absorption lines (see [48] and references therein, for a water vapour DIAL system), as well as background radiance from sunlight scattering and spontaneous emission (e.g., from auroras). We expect that well-established retrieval approaches for limb sounding systems, such as the 'onion peeling' implemented, e.g., for SCIAMACHY solar occultations [28], will be adapted for this system. These algorithms require the use of a reference atmosphere, and the quality of the results will depend on the appropriateness of that reference. However, Noel et al. [28] find that this dependence is greatest at low altitudes and is negligible at an altitude of 35 km. Long-path inhomogeneities will need to be deconvolved from the limb measurements.
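For concreteness, the sketch below shows onion peeling under idealised assumptions (concentric spherical shells, straight rays, no refraction): each limb path-integrated column is a sum over the shells above its tangent height, giving a triangular system that is solved from the top of the atmosphere downwards. This is a generic illustration of the technique, not the algorithm of [28].

import numpy as np

def onion_peel(tangent_heights_km, integrated_columns, r_earth_km=6371.0):
    """Per-shell densities from limb columns; heights strictly descending."""
    z = np.asarray(tangent_heights_km, dtype=float)
    y = np.asarray(integrated_columns, dtype=float)
    r = r_earth_km + z
    n = len(z)
    x = np.zeros(n)
    for i in range(n):                      # topmost tangent height first
        path = np.zeros(n)
        for j in range(i + 1):
            # Shell j spans r[j] (bottom) to r[j-1]; the topmost shell is
            # capped one layer thickness above the highest tangent point.
            r_top = r[j - 1] if j > 0 else r[0] + (r[0] - r[1])
            path[j] = 2.0 * (np.sqrt(r_top**2 - r[i]**2)
                             - np.sqrt(max(r[j]**2 - r[i]**2, 0.0)))
        x[i] = (y[i] - path[:i] @ x[:i]) / path[i]  # peel off upper shells
    return x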
With a nominal PRF of 25 Hz for the limb sounding, we avoid signal ambiguities at the maximum path length between the instrument and the retroreflector (∼5,500 km): each pulse returns before the following one is emitted. At a vertical descent rate of the optical link through the atmosphere of approximately 2.8 km/s, the vertical sampling resolution is therefore constrained to roughly 110 m. Single signals may need to be integrated to increase the SNR, which is typically proportional to the square root of the number of shots, and the actual vertical resolution will be degraded accordingly. We expect single-shot SNRs from the reflected pulse returns to be large enough to make large averaging sets unnecessary, though this has yet to be confirmed by a detailed end-to-end simulation (see Section 4 for the preliminary calculations). In addition, with every pulse or burst, four DIAL wavelengths are emitted, yielding three pairs for averaging, although two of them will generally be unsuited to the water vapour concentration at a given altitude.
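Both figures follow from elementary timing arithmetic, as the check below shows (the 5,500 km maximum path and 2.8 km/s descent rate are taken from the text):

C_KM_S = 299_792.458   # speed of light [km/s]

prf_hz = 25.0
round_trip_s = 2.0 * 5_500.0 / C_KM_S            # ~36.7 ms two-way flight time
print(round_trip_s < 1.0 / prf_hz)               # True: unambiguous at 25 Hz

descent_km_s = 2.8                               # optical-link descent rate
print(f"vertical sample spacing: {1e3 * descent_km_s / prf_hz:.0f} m")  # ~112 m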
Forward radiative transfer modelling of brightness temperatures and reflectance as a function of cloud parameters across the relevant wavelengths will be used to generate look-up tables for the exploitation of the radiometer data. By matching the observed data to the look-up table values, users can recover the cloud properties.
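A minimal sketch of such a look-up-table inversion is given below: forward-modelled signals are tabulated over a grid of cloud parameters, and each observation is assigned the parameters of its nearest table entry. The grid, signal values and parameter pairs are illustrative placeholders only.

import numpy as np

# LUT rows: (R_e [um], IWP [g/m^2]) against forward-modelled
# (VIS reflectance, split-window BTD [K]); all values are invented.
lut_params = np.array([[10.0, 20.0], [10.0, 100.0],
                       [30.0, 20.0], [30.0, 100.0]])
lut_signals = np.array([[0.35, 2.1], [0.55, 3.0],
                        [0.25, 1.4], [0.45, 2.2]])

def invert(observed):
    """Nearest-neighbour match of an observed signal vector to the LUT."""
    d2 = np.sum((lut_signals - observed)**2, axis=1)
    return lut_params[np.argmin(d2)]

print(invert(np.array([0.50, 2.8])))   # -> [ 10. 100.]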
Benefits of this novel system are likely to come from the improvement of model routines and parameterizations through process studies of clouds and convection. Assimilation of high-resolution, accurate water vapour measurements within models would lead to improved water vapour convergence estimates [4]. It would also provide a framework for enhanced information extraction and the interpretation of dynamic features of the atmosphere occurring within the UTLS. This is particularly relevant for creating climatologies of the tropopause structure, including the identification of stratosphere-troposphere exchange events and tropopause folds, and for resolving structures and finer-scale fluxes in the UTLS and lower stratosphere, such as the polar vortex or water vapour fluxes above the tops of deep convective clouds. Such assimilation systems are likely to operate at finer resolutions than are currently state of the art, which will make corresponding high-resolution measurements ever more necessary.
First-Order Estimation of the Link Budget and Measurement Accuracy
Here we present a preliminary feasibility study of the IPDALLS concept with respect to our primary mission objective, i.e., the profiling of water vapour in the UTLS and above, using a model developed for rough systems trade-offs. In due course, detailed instrument performance end-to-end simulations and corresponding sensitivity studies, noise and error evaluations would be required, as outlined, e.g., in [33,48-52] for similarly operating payloads. For a system relying on wavelength pairs with extremely narrow spectral separation, pressure and Doppler broadening of the absorption lines, determining the Voigt line profile, as well as water vapour self-broadening, will become very relevant. Consequently, absorption cross sections will need to be calculated as a function of pressure and temperature, or altitude (as in Figure 5), and corresponding atmospheric profiles assumed, retrieved or modelled as a preliminary step in a retrieval algorithm. Furthermore, the Doppler shifts induced by the diverging spacecraft, different for the forward and backward paths, will need to be accounted for in the estimation of the absorption cross sections. For these preliminary calculations, we assumed online wavelengths centred on the absorption peaks of motionless molecules at the tangent point and on the forward path, which have thus been previously generated at a slightly higher frequency by the receding transmitter. The Doppler shift induced by the receding retroreflector, and experienced by the same molecules on the backward path, corresponds to an average beat frequency of −14.9 GHz or a red-shift of 43.6 pm. In general, this makes the atmosphere more transparent on the backward path, although the shifted λon2 then resides in the highly sensitive region of a wing's inflection point, which may introduce a potential source of uncertainty. The equations and methodology used to estimate the performance of the IPDALLS concept are described in the Appendix, and an abbreviated set of baseline instrument parameters is given in Table 1.
Figure 6(a,b) shows the various transmission profiles in limb sounding geometry due to molecular absorption and scattering, respectively. Molecular absorption (in particular due to water vapour) was calculated using the MIPAS Reference Forward Model (RFM; see the Appendix) for a standard mid-latitude daytime atmospheric profile, while scattering was calculated with the 1976 US Standard Atmosphere. The differing absorption of the various wavelengths at a given tangent point height is evident, as is the necessity of using several wavelength pairs (with the same reference offline wavelength) to probe different altitudes, since wavelengths with strong absorption coefficients, and hence good sensitivity at heights where water vapour concentrations are low, are completely absorbed in the lower atmospheric layers. Scattering is essentially the same at the on- and offline wavelengths, which is one advantage of this differential absorption technique. A retrieval of atmospheric aerosol, one of the secondary mission objectives, would exploit scattering at widely spaced wavelengths. Figure 6(b) shows that below 10 km, very little signal is received from the 532 nm harmonic when considering molecular scattering alone (i.e., before considering the effect of aerosol), so synergistic limb-nadir aerosol retrieval would likely be constrained to the stratosphere, where ozone absorption will need to be accounted for. Also plotted is the refractive dilution factor (see, e.g., Dalaudier et al. [54]), which becomes increasingly relevant for signal attenuation within the UTLS and below. The kink at around 10 km is due to the strong refractive-index change at the level of the tropopause. The irregularities and increase of the factor below about 5 km are related to a refractivity decrease in the moist lower troposphere, due to high water vapour content, and this region is excluded from our performance simulations.
A simplified power budget is used to estimate the returned signal energy for the limb sounding measurements, and the corresponding profiles are shown in Figure 6(c). Losses are assumed to be mainly due to geometric beam broadening, molecular absorption and scattering, retroreflector and receiver efficiency, continuous as well as turbulent refractive effects, and aerosol extinction, the last two of which we have not considered in this study. In practice, many further noise and error sources have to be considered, and a good account of the key issues affecting the SNR of a nadir IPDA LiDAR is given in [33]. In this paper, we will for simplicity only consider the carrier-to-noise ratio (CNR) as a means to estimate measurement sensitivity with respect to the water vapour optical depth and expected random errors.
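To indicate the kind of one-line budget involved, the sketch below multiplies a transmitted pulse energy by geometric, atmospheric and efficiency factors; every parameter value is an assumption chosen for illustration, not the mission baseline, and the return spread is crudely taken as diffraction-limited off the retroreflector aperture.

import math

def received_energy_j(E_tx=0.05,          # transmitted pulse energy [J]
                      theta_half=10e-6,   # transmit half-divergence [rad]
                      R=5.5e6,            # one-way path length [m]
                      d_retro=0.10,       # retroreflector diameter [m]
                      d_rx=0.5,           # receive telescope diameter [m]
                      wavelength=935e-9,  # sounding wavelength [m]
                      t_atm=0.8,          # one-way atmospheric transmission
                      eta=0.4):           # lumped optical efficiencies
    fwd = (d_retro / 2)**2 / (theta_half * R)**2   # fraction hitting retro
    theta_ret = 1.22 * wavelength / d_retro        # diffraction half-angle
    ret = (d_rx / 2)**2 / (theta_ret * R)**2       # fraction on telescope
    return E_tx * fwd * ret * t_atm**2 * eta

E_rx = received_energy_j()
print(f"received energy: {E_rx:.1e} J "
      f"(~{E_rx / 2.1e-19:.1e} photons at 935 nm)")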
To achieve a high CNR, we need to maximize the light throughput of the transmitted beam, by using a high-power source with small beam divergence. We simultaneously need to minimize scattered light, by using a high focal number telescope to reduce the field-of-view (FOV). The solar background radiance, the standard limb passive DOAS signal, will be considerable for any daytime measurements in the near-infrared and at visible wavelengths. If scattered sunlight were of the same order of magnitude as the received laser signal, this would considerably limit our observation capabilities. Sugimoto et al. [55], on the other hand, report no significant influence of sunlight in laser ground-space absorption measurements at IR wavelengths. Exposure to direct sunlight has not been considered, since it is likely to damage the detector and must be avoided.
Figure 7(a) shows the contribution of the various sources of noise to the total detector noise current for the on- and offline wavelengths: signal shot noise, Rayleigh-scattered solar background in the baseline case of 0° azimuth and 60° zenith solar angles with respect to the occultation plane, and detector dark current noise of surface and bulk origins. The maximum signal photocurrent of 6 mA at the top of the atmosphere (TOA) falls well within the maximum peak rating of the detector (Table 1).
Figure 7(b) shows the CNR with altitude: the potential of the IPDALLS technique is evident from the very high CNRs achieved throughout the stratosphere and into the tropopause layer, with TOA peak ratios on the order of 700. Similar values have recently been reported for a comparable mission concept described in [41]. The kink around the tropopause originates from the refractive dilution.
Above roughly 25 km, where atmospheric transmission is high, the CNR for all wavelengths is clearly signal noise power limited (see [56]). Consequently, few improvements can be expected from reducing detector noise or telescope FOV, and the stringent requirements with respect to the latter could be relaxed, making search and tracking easier. The same holds for the lower atmospheric layers, where the signal shot noise remains consistently larger than the background and detector noise at the online wavelength corresponding to the relevant height range, although the scintillation variance is likely to lie above the shot noise. Equally, the range of possible occultations could be increased to include configurations with higher solar zenith angles than the baseline. Note, however, that neither the background from auroras, stellar radiation and non-Rayleigh scattering, nor interference speckle have been considered, and that the actual received signal power and shot noise are likely to be lower due to larger optical depths in the real atmosphere and less favourable parameters in the real system. Also, the full assembly of detector and amplification circuit would need to be considered to evaluate the real optical receiver noise current.
Figure 7(c). Relative random error of the water vapour two-way path-averaged optical depth for the various wavelength pairs. The coloured regions with the solid vertical lines correspond to the optimal measurement ranges (using geometric means) for each wavelength, as defined in Figure 6.
Figure 7(c) shows the relative random error at maximum sampling resolution for single-shot measurements. For each wavelength pair, the random error increases at low altitude because of a weak CNR and at high altitude because of decreasing differential absorption. Over the entire measurement range defined in the mission objectives, the modelled random error on the two-way path-averaged water vapour optical depth varies between 0.2% and 3%, with a significant contribution of refractive dilution to the error within the UTLS. Optimizing the choice of wavelengths may still improve these results and narrow the current gaps between the optimal retrieval ranges. Since the model atmosphere used to simulate transmission after molecular absorption held about 6 ppmv of water vapour towards the upper altitudes of 45 to 50 km (not shown), we hypothesize that significantly lower water vapour concentrations can be detected at lower heights, where the effective absorption path increases. However, more advanced modelling and system definitions would be required to translate the modelled random error into errors on mass mixing ratios and to account for systematic errors and additional factors influencing the random error.
Potential Performance Limitations
The results presented above were obtained without considering the turbulent nature of the atmosphere and related multiple diffraction effects, which arise in addition to the continuous refractive dilution and chromatic refraction. Random anisotropic and isotropic irregularities in upper-air density, originating from internal gravity waves, wave breaking, wind shear and other instabilities, result in inhomogeneity of the refractive index, which leads to temporal and spatial fluctuations in the recorded intensity of radiant energy: scintillations. Scintillations have been successfully exploited in stellar occultation for gathering data on atmospheric turbulent dynamics (e.g., [57-59]); however, they severely affect the absorption measurements of the GOMOS instrument (e.g., [60]), the most relevant comparable payload on orbit, and may be even more detrimental for laser (coherent) radiative transfer due to the generation of speckle patterns.
The only analogue to our proposed measurement technique is an inter-orbit optical communications experiment between a geostationary and a LEO platform, which maintained a narrow-beam link through the atmosphere during an occultation event [61,62]. In this case, the combined action of a smaller continuous-wave laser divergence and the very long distance to the geostationary orbit made the technique more susceptible to angular beam deflections (wander) than the one described in this paper. Both terminals experienced strong fluctuations of the received power, a non-monotonic decrease of the normalized intensity log-normal probability density mode from about 50-40 km downwards, first detector saturation due to refractive lens effects at about 35 km, and a termination of the optical link, in the best case, at 15 km.
For the GOMOS retrieval corrections [60], this steady decline of the average power has been factored into the total atmospheric transmission as an altitude-dependent transmittance, in which refractive dilution is modulated by scintillation estimated from measured intensity fluctuations. The average decrease in signal strength is simultaneously accompanied by temporal amplitude fluctuations due to scintillations which often exceed the GOMOS instrumental noise, hence impacting the SNR, and therefore the measurement error, in a twofold way. According to the available scintillation measurements, the scintillation variance is saturated below 25-30 km at a root-mean-square level of about 100% of the mean signal. Such a level of scintillation variance within the UTLS can also be expected for the IPDALLS principle.
Even if two spectrally adjacent channels are affected by turbulence in virtually the same way, this may not improve the single-shot SNR in the individual channels, and scintillations will still degrade the measurement accuracy. Since the wavelength separation for water vapour is very small (less than 0.06%), the light at the different wavelengths travels essentially through the same air density irregularities, if emitted simultaneously. In combination with short laser pulses, this allows us to expect that the impact of scintillations on differential absorption in the water vapour channels is very small. According to simulations for ACCURATE [63,64], we can expect the root-mean-square error in differential transmission due to scintillations to remain less than 1% down to 5-10 km for the proposed system.
On the other hand, MacKerrow et al. [65] point out that the correlation between multi-chromatic signal fluctuations is minimal when speckle dominates over other sources of noise, the speckle pattern between different wavelengths is itself decorrelated, and the mean number of integrated speckles per pulse is small, which can be expected in classic hard-target low-divergence LiDAR systems. Pure retroreflection speckle has been observed during laser ranging due to the use of reflector arrays [66], but has been avoided by design for absorption measurements through the use of a single cube-corner retroreflector [55], as proposed in our preliminary design. Turbulent refraction will also degrade an initially Gaussian beam profile into spatial intensity fluctuations in the incident wave plane, characterized by a spatial correlation scale and leading to further potential signal loss, although MacKerrow et al. [65] refer to interference generated during the reflection of coherent light off a rough Lambertian target. This is of less concern to the technique described in this paper, and results off a small (essentially point) retroreflector showed a much higher bichromatic cross-correlation (0.87), albeit with relatively high variance [67]. For a retroreflector and a receiving telescope of finite apertures, the possibility of integrating over such a speckle structure or individual cells will need to be accounted for [66]. For the total power not to fluctuate, the aperture sizes would need to be larger than the spatial correlation scale [68], which is not satisfied with the current design specifications. In particular, if the radius of a single (point) retroreflector is smaller than the intensity spatial correlation scale, fluctuations in the reflected flux will not be averaged by a receiving aperture of arbitrary size, though a larger telescope diameter would still reduce the signal variance.
Ehret et al. [33] show for a rough-target IPDA LiDAR that speckle has a large potential to severely degrade the final SNR, and all of the above will need to be considered in a much more detailed analysis of the expected performance of our system, and validated experimentally. We can already note, however, that we expect strong constraints on the optimistic results presented in the previous section. The GOMOS transmission spectra retrievals are corrected using anisotropic scintillation measurements from the fast photometer [60]; we expect to realize similar corrections by introducing scintillation quantification using the continuous-wave high-divergence IR beacon laser potentially required by the PAT system (see Figure 8), which would also provide the potential for gathering further science data.
Payload Design: IPDALLS System and Nadir LiDAR
Based on the aforementioned measurement configuration, the mission's primary payload comprises a multi-wavelength bidirectional transmitting system, receiving optics, a spectral separation and detection unit, a transient recorder data acquisition unit, a control unit and a scanning and tracking facility based on novel (albeit proven) optical communication technology. A simplified conceptual block diagram (Figure 8) illustrates the payload subdivision into nadir- and limb-pointing instruments. Emitted beams are in red; the optical path of incoming radiation is in blue. Redirection of the outgoing beams into either limb or nadir sounding geometry, in case of breakdown of one of the redundant transmitter units, is drawn as rotatable unit-switching mirrors (US). Sampling of the emitted pulses is via very low transmission beam splitters (BS). Limb telescope field-of-view scanning can be performed in pitch and in yaw using the flat scanning mirror (S) and the rotating drum (D), respectively. In order to maintain the very narrow divergence of the outgoing limb beam with a realistic beam quality parameter and limited diffraction, pulses are routed through the main telescope for beam expansion. Switching between transmission and reception is conceptually performed by a fast mirror galvanometer (GM). Planar scanning of the outgoing limb beam within the telescope's FOV is performed using a fine-pointing mechanism (FPM). Detectors can be shielded from direct sunlight exposure using the field stop and shutter (FS). A fraction of the returned signal (above 1,000 nm) is directed onto a planar acquisition and tracking (AT) CCD through the PAT beam splitter (PBS), to facilitate retroreflector locking before the measurement sequence. The continuous-wave beacon laser beam, with a higher divergence than the measurement beam, potentially necessary for the PAT system and turbulence estimation, can be steered using a piezoelectrically-controlled mirror (P), and is not further described in this paper. For further details on PAT system layouts for optical communication, we refer to [70].
The transmitter constitutes a critical driver of the overall mission concept. Since the concept draws extensively on the heritage of the WALES mission proposal, and because further industrial feasibility studies are necessary, a detailed description of the laser source is beyond the scope of this study. A comprehensive account of water vapour differential absorption LiDAR transmitter choices can be found in [69]. In addition, retrieving profiles of atmospheric aerosol size distribution, backscatter and extinction coefficients from elastic backscatter LiDAR signals generally relies on at least two widely spaced laser wavelengths and their polarisation.
DLR's airborne WALES precursor described in [42] uses two Nd:YAG lasers in master oscillator/power amplifier configuration. The radiation of the WALES pump lasers is frequency-doubled by a second-harmonic generating crystal (532 nm) and then converted to values in the vicinity of 935 nm by an Optical Parametric Oscillator (OPO). The output of the OPO is switched between two wavelengths, using calibrating seed laser diodes that can be tuned over a range of about 1 nm via temperature or current control. In order to provide the four wavelengths that have been used in our preliminary performance studies, two identical chains of lasers and non-linear conversion stages would need to be synchronized in a temporally interleaved fashion, at least during nadir operation.
Industrial research into the most mature transmitter options for the spaceborne WALES ESA candidate mission converged, however, on a high-energy Ti:Sapphire oscillator in a ring cavity configuration, pumped by a frequency-doubled Nd:YAG laser and injected by four tunable seed lasers to set the frequencies [4].
The stringent requirements on frequency stabilisation for water vapour detection, particularly relevant in the stratosphere where absorption lines are very narrow, can be met by seed lasers used in conjunction with a multi-pass water vapour absorption cell (for reference) and wavemeters. Wirth et al. [42] give a good overview of techniques implemented for the monitoring and diagnostics of the source's very high frequency stability and spectral purity (typically >99.9%). A particular challenge of active laser sounding in limb geometry with retroreflection will arise from spacecraft motion-induced Doppler shifts. Doppler shifts can be precisely modelled from the orbit geometry and are expected to vary by less than 1 pm over the range of sounding altitudes. However, absolute wavelength shifts on the order of 43 pm for diverging spacecraft and extremely narrow water vapour absorption peaks imply that the emitted online sounding wavelengths will need to be generated and calibrated outside the actual absorption peaks. Furthermore, wavelengths will be shifted twice, on the forward as well as on the backward path, implying that peak absorption can only occur over one of them and effectively reducing the lower detection limit.
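The quoted figures are consistent with a first-order Doppler estimate; in the check below, the line-of-sight speed is an assumed average for two counter-rotating LEO spacecraft, chosen to reproduce the quoted beat frequency.

C = 2.99792458e8        # speed of light [m/s]
lam = 935.6845e-9       # online sounding wavelength [m]
v_los = 13.95e3         # assumed average receding line-of-sight speed [m/s]

dnu = -v_los / lam      # one-way Doppler beat frequency [Hz]
dlam = lam * v_los / C  # corresponding red-shift [m]
print(f"beat frequency: {dnu / 1e9:.1f} GHz")   # ~ -14.9 GHz
print(f"red-shift: {dlam * 1e12:.1f} pm")       # ~ 43.5 pm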
The transmitter's heat-generating elements are cooled by a heat pipe assembly linked to the primary spacecraft's temperature control subsystem. Temperature control should provide a stable regime at an equilibrium temperature to minimize thermal stress on the transmitter. As mentioned above, it may be necessary to operate a single transmitter unit continuously and to switch its output between the nadir and limb systems. Switching must be designed to avoid single points of failure and to guarantee the maintenance of accurate optical alignments over the mission lifetime. It may also be necessary to implement fully independent redundant transmitter units as back-up in case of premature failure. The use of redundant units must be traded off against thermal requirements and overall weight and size.
The receiving optics consist of a separate, slightly cross-track off-nadir telescope and a limb telescope. Our preliminary design calculations were based on limb measurements with a 0.5 m diameter telescope, while the required nadir optics were specified with a larger 1 m diameter main mirror, in order to collect sufficient backscatter radiance from the lowermost atmospheric region determined by the mission objectives.
A fine-pointing mechanism within the limb optical path is used for scanning the laser beam within the telescope's FOV. Before beam collimation of the limb telescope signal, a small proportion of the received power (above 1,000 nm) is projected onto an acquisition and tracking CCD in the image plane, using a beam splitter. This image is used to localize the retroreflecting spacecraft within the telescope's FOV for a faster and more efficient searching and centering procedure. Aside from this, the same spectral separation and detection unit can be used for both nadir and limb instruments (see Figure 8); it may, however, be preferable to use two different units optimized for their respective sounding principle, providing redundancy and/or limiting signal losses due to beam combining. Since the four water-vapour wavelengths are sent off sequentially, one single gated detector could in theory be used for all of them (see [42]). In practice, the optimal pulse separation time (of order 100 µs) will be much smaller than the maximum return time for spaceborne nadir (<4 ms) and limb (<40 ms) sounding, which makes wavelength separation or demultiplexing mandatory.
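A quick back-of-the-envelope check, sketched below in Python, illustrates why a single gated detector is impractical here: the round-trip light travel times for the quoted nadir and limb ranges (values assumed from the orbit geometry described earlier) exceed the ~100 µs pulse separation by one to two orders of magnitude.

```python
C = 299_792_458.0  # speed of light [m/s]

for name, one_way_range_km in [("nadir", 582.0), ("limb", 5_000.0)]:
    t_return_ms = 2.0 * one_way_range_km * 1e3 / C * 1e3
    print(f"{name}: round trip {t_return_ms:.1f} ms vs 0.1 ms pulse spacing")
```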
In [4], wavelength separation through Fabry-Pérot etalons tuned to the respective frequencies was proposed, but other spectral separation schemes can be envisaged as well. Individual detectors will be shielded from solar background noise via narrow-band (nominally 1 nm) interferometric filters. Dielectric beam splitters will be used to separate the 1,064 nm first- and the 532 nm second-harmonic channels, for which the depolarization ratio will be analyzed using a polarizing assembly. Low dark current analogue avalanche photodiodes (APDs) are needed for the near-IR channels, whilst photo-multiplier tubes (PMTs) or appropriate (vacuum) APDs will be used for the 532 nm wavelength. Gated diaphragms are used to avoid telescope cross-talk and direct sun exposure. The outgoing pulse is sampled for pulse diagnostics, calibration and transient recorder triggering. Hardware averaging over multiple range-bins or pulses onboard the spacecraft was deemed inappropriate in [4], which will be an important factor to consider when sizing the data downlink.
Payload Design: Radiometer
The radiometer is a fairly conventional design, and consists of two multispectral sensor instruments. The instruments are designed to provide images in the visible and infrared at the chosen wavelengths and bandwidths, with a concept halfway between AVHRR/3, with which it shares 4 bands, and MODIS, with 7 bands shared. The instrument uses pushbroom scanner technology, in nadir orientation. The 200 m × 200 m resolution requirement is fulfilled by 1 × 2,048 pixel sensors and a nominal dwell period of approximately 26 ms. The detectors are read out five times during the dwell period, to avoid saturation.
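As a rough consistency check (not part of the original design work), the dwell period follows directly from the along-track pixel size and the ground-track speed; the ~7.6 km/s speed assumed below is a typical value for a ~580 km orbit.

```python
ground_speed_ms = 7_600.0  # assumed ground-track speed [m/s]
pixel_m = 200.0            # along-track resolution requirement
dwell_ms = pixel_m / ground_speed_ms * 1e3
print(f"dwell period ≈ {dwell_ms:.1f} ms")  # ≈ 26 ms, matching the text
```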
Each instrument consists of a single box containing all optical, electronic and mechanical elements. Large radiator areas are available for heat dissipation to provide a stable environment for the VIS/SWIR and the MIR/TIR detectors. The payload will be composed of four units:
• The VIS/SWIR instrument, providing data from four spectral channels (VIS: 0.66 µm; SWIR: 1.24 µm, 1.38 µm and 1.66 µm).
• The MIR/TIR instrument, providing data from the mid- and thermal-infrared channels.
• A common optical bench module that interfaces with the platform. The bench is located outside the main platform structure, with the control unit inside.
• The instrument control unit that drives both the VIS/SWIR and the MIR/TIR instruments.
The VIS/SWIR system optical design is shown in Figure 9. Dichroic splitters reduce the size of the calibration diffuser by providing a common aperture for all four channels; although a common entrance pupil (aperture) is necessarily remote from the lenses, the dichroics tend to decrease the overall optical system size. A rotatable calibration mirror is used in order to provide views both of Earth (bright source views) and of cold space. The bright sources are provided by the observation of high-level opaque clouds, bright deserts or dark oceans, fulfilling a calibration accuracy better than 5%. Using a range of bright sources allows coverage of the full dynamic range of the instrument. An on-board algorithm provides the calibration target characterization [71].
The MIR/TIR optical system design is shown in Figure 10. Using three separate detectors with independent optics would require three relatively large apertures, and would also have a fairly severe impact on the size of the external calibration hardware. A single aperture and a single detector are therefore preferred, partly for control of system size, but also because a single calibration source and a single detector will likely provide optimum inter-channel relative accuracy. For the TIR, we introduce the dichroics near an intermediate image formed at relatively low aperture, where we have placed the two TIR filters. After exiting the filters, the large intermediate image is directed onto the detector by a relay lens. The MIR/TIR sensors are made of mercury-cadmium-telluride (HgCdTe). The TIR instrument uses a Stirling cycle cooler to maintain the detector temperature at 77 K, which provides the required radiometric resolution. A slight modification of the 2,048 × 2,048 pixel Teledyne HAWAII-2RG, used in astrophysics, should suit the instrument requirements.
For MIR/TIR calibration a rotatable mirror is used in order to provide views of the Earth, of cold space and of a warm black body. The last two views provide the two known radiance levels that are required for absolute calibration of all MIR/TIR channels. The mirror is used at the same angle of incidence for cold space and Earth views, so that it has the same emissivity in these two configurations, providing a very good zero-radiance reference. Edge structures on the mirror are used to block the cold space aperture during Earth view. The warm black body is a deep-cavity black body, with an emissivity that will always be very close to unity. The black body temperature will be monitored precisely, providing a calibration accuracy better than 1 K.
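The two-point calibration described above can be sketched as a simple linear count-to-radiance mapping; the counts and radiance in the Python fragment below are hypothetical placeholders, not instrument values.

```python
def two_point_cal(c_space: float, c_bb: float, rad_bb: float,
                  rad_space: float = 0.0) -> tuple[float, float]:
    """Return (gain, offset) of a linear counts-to-radiance mapping."""
    gain = (rad_bb - rad_space) / (c_bb - c_space)
    offset = rad_space - gain * c_space
    return gain, offset

# Hypothetical counts for the cold-space and black-body views:
gain, offset = two_point_cal(c_space=120.0, c_bb=3_900.0, rad_bb=9.5)
earth_counts = 2_450.0  # hypothetical Earth-view counts
print(f"scene radiance ≈ {gain * earth_counts + offset:.2f} (units of rad_bb)")
```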
Spacecraft Design: Mass and Power Budgets
The overall size of the primary spacecraft will be in the range of 3.5 m × 2.5 m × 2.5 m and will be shaped by the limb and nadir telescopes, the optical benches, solar arrays and radiators. It will have a dry mass of roughly 3,000 kg. In this configuration it can be launched on a Soyuz launcher. The payload is estimated to consume an average of 1,500 W, with the spacecraft bus requiring around 700 W. To supply the energy for the subsystems and the payload, and to charge the batteries for the eclipse time, a solar array of 17 m² will be required. It will be panel-mounted to accommodate the large area and optimize the Sun incidence angle by two-axis Sun tracking.
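A first-order sizing sketch along these lines is shown below; the cell efficiency, loss factor and eclipse margin are assumed values, and the gap between the resulting ~10 m² and the quoted 17 m² would be absorbed by end-of-life degradation and design margins not modelled here.

```python
SOLAR_CONSTANT = 1_361.0  # W/m^2 at 1 AU
P_LOAD = 1_500.0 + 700.0  # payload + bus [W], from the text

eta_cell = 0.28        # assumed solar cell efficiency
losses = 0.8           # assumed packing/pointing/harness factor
eclipse_margin = 1.35  # assumed margin for battery charging

area = P_LOAD * eclipse_margin / (SOLAR_CONSTANT * eta_cell * losses)
print(f"beginning-of-life array area ≈ {area:.1f} m^2")  # ≈ 10 m^2
```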
To maintain the payload and the primary spacecraft subsystems within their operating temperature ranges, two radiators with an area of 5.8 m² each are attached to two sides of the spacecraft to dissipate heat. Louvers ensure that one radiator will always face deep space, and are closed when exposed to direct sunlight. Heat pipes will transport the heat to the radiators, and to the heaters for eclipse time, and in emergency cases will sustain a minimum temperature for the payload and the battery.
The role of the retroreflector spacecraft is to reflect the laser pulses back to the primary spacecraft. Its design performance directly determines the quality of the IPDALLS measurement. Several options for the design of the retroreflector spacecraft were studied.
Initially, completely passive satellites with multiple retroreflective surfaces were considered, similar to those flown in the LAGEOS mission [72]. Several issues, such as orbit deterioration, lack of control and inability to de-orbit in a controlled manner, caused this idea to be rejected, and an active retroreflector was deemed necessary. Active spherical retroreflector satellites, having multiple retroreflective surfaces but with added orbit control, were then considered. This idea was eventually dismissed because of launcher housing complexity and difficulty of manufacture. In addition, the multiple retroreflective surfaces may have caused speckle noise distortion of the beam signal and point-ahead-angle issues [55]. The final retroreflector spacecraft consists of a microsatellite with attitude and orbit control (0.65 m × 0.65 m × 0.8 m) carrying a single corner cube retroreflector with a diameter of 0.5 m and an effective reflective area of 0.2 m². The mass and power budgets of both the primary spacecraft and a retroreflector spacecraft are shown in Table 2.
Spacecraft Tracking
Laser ranging from available ground station networks is performed continuously to accurately track the retroreflector spacecraft. This information is used for updating contact schedules with the primary spacecraft. The retroreflector spacecraft will be oriented towards the primary spacecraft using its attitude control system and positioning data. Retroreflector spacecraft tracking by the primary spacecraft commences when the nadir measurements have been completed and continues until the retroreflector spacecraft disappears behind the Earth horizon, approximately 215 seconds later (as shown in Figure 2). The primary spacecraft will receive the positional data of the retroreflector spacecraft to within 0.1 m (via the ground station) and GPS data for its own position.
The primary spacecraft performs a search for the retroreflector spacecraft (using its scheduling data) to establish the initial line of sight (LOS). The PAT system may incorporate a separate, low-power, high-divergence laser to perform broad search pattern techniques until a signal is received, at which time measurement can commence using the low-divergence laser. A small proportion of the received power (above 1,000 nm) is used to localize the retroreflecting spacecraft within the telescope's FOV for a fast, closed-loop, efficient tracking procedure. The attitude control system of the primary spacecraft may need to perform coarse movement of the primary spacecraft if a signal is not received, but this should be minimised. Fine alignment of the laser is achieved by the movable mirror in the attached assembly, similar to the test bed proposed and tested by Wang et al. [73]. During tracking, minimal mass is moved, avoiding distortions and the need for excessive attitude control. An added restriction is that both telescopes must avoid direct sunlight and sun glare from the surface of the Earth. This may be achieved through accurate scheduling data provided to the primary spacecraft.
The design of the spacecraft is such that the nadir telescope always points perpendicular to Earth. From the initial design process it is proposed that the limb telescope is positioned beside the nadir telescope, also facing perpendicular to Earth. Attached to the limb telescope is an assembly consisting of a movable mirror angled to route the signal to and from the retroreflector (Figure 8). The limb telescope has a very narrow FOV, which is required in order to minimize atmospheric background noise. Because of this restriction, the attached mirror assembly has to be able to pitch up and down along the orbital plane, with microradian accuracy, so that the retroreflector remains in the centre of the FOV. This scanning configuration ensures that neither the primary spacecraft nor its limb telescope has to change position every time it needs to track a retroreflector spacecraft, since the retroreflector is kept centred within the FOV of the limb telescope via the movable mirror in the attached assembly.
An important feature to note about this measurement configuration is that the constellation will never remain in exactly the same orbital plane. Small deviations in inclination from a polar orbit within the specified injection precision may induce a diverging precession of primary and retroreflector spacecraft orbits, imposing additional yaw steering capabilities on the limb FOV. This is shown conceptually in Figure 11(a), for the exaggerated case of a 10 degree inclination difference, and the example of four limb sounding measurement sequences. The required yaw tracking varies across the orbit; in this situation, it is slightly diverging over the equator and strongly converging over the poles. Predicted realistic rates of required yaw tracking, within the specifications of the actual constellation, are shown in Figure 11(b) for a series of measurement sequences with different positions of the tangent point along the orbit. In this example, the primary and retroreflector spacecraft orbits diverged in the Right Ascension of the Ascending Node (RAAN) by only 0.01 degrees, highlighting the importance of precise orbit maintenance.
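The geometry behind this yaw requirement can be illustrated by computing the angle between the two orbit planes from their normal vectors, as in the Python sketch below; the 90 degree inclinations are simplifying assumptions for illustration (counter-rotation flips the sign of the normal but not the plane separation).

```python
import numpy as np

def orbit_normal(inc_deg: float, raan_deg: float) -> np.ndarray:
    """Unit normal of an orbital plane with inclination i and RAAN Omega."""
    i, o = np.radians(inc_deg), np.radians(raan_deg)
    return np.array([np.sin(i) * np.sin(o), -np.sin(i) * np.cos(o), np.cos(i)])

n_psc = orbit_normal(90.0, 0.00)  # primary spacecraft plane (assumed polar)
n_rsc = orbit_normal(90.0, 0.01)  # retroreflector plane, 0.01 deg RAAN offset
angle = np.degrees(np.arccos(np.clip(np.dot(n_psc, n_rsc), -1.0, 1.0)))
print(f"plane separation ≈ {angle:.4f} deg")  # ≈ 0.01 deg, steered in yaw
```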
Orbit Control and Launch Options
Active orbit control and maintenance is performed by each of the satellites to prevent the counter-rotating orbits from drifting apart. Since maintaining the inclination is a crucial issue, a precise ∆v budget based on data from [47] has been determined. This includes correction of inclination after orbit injection from the launch vehicle, altitude control by drag make-up, perturbations caused by radiation, and correction maneuvers to maintain the orbit inclination within the required ±0.01 degrees for a nominal mission lifetime of 4 years. The ∆v budget additionally includes a maneuver to de-orbit the satellite after end-of-life. For the retroreflector spacecraft, an additional ∆v item for distribution of the satellites within the orbit is estimated at 11 m/s. This maneuver is performed by an altitude change via a Hohmann transfer, a drift period and a return transfer.
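The 11 m/s figure is consistent with a small out-and-back Hohmann transfer; the sketch below reproduces it under the assumption of a 10 km drift-orbit offset, which is our own illustrative choice rather than a value from the ∆v budget in [47].

```python
import math

MU = 3.986_004e14  # Earth's gravitational parameter [m^3/s^2]
R_E = 6_371e3      # mean Earth radius [m]

def hohmann_dv(r1: float, r2: float) -> float:
    """Total delta-v [m/s] of a two-burn Hohmann transfer between circular orbits."""
    dv1 = math.sqrt(MU / r1) * abs(math.sqrt(2.0 * r2 / (r1 + r2)) - 1.0)
    dv2 = math.sqrt(MU / r2) * abs(1.0 - math.sqrt(2.0 * r1 / (r1 + r2)))
    return dv1 + dv2

r_nominal = R_E + 550e3                         # retroreflector orbit altitude
r_drift = r_nominal + 10e3                      # assumed 10 km drift orbit
dv_total = 2.0 * hohmann_dv(r_nominal, r_drift)  # transfer out and back
print(f"phasing delta-v ≈ {dv_total:.1f} m/s")   # ≈ 11 m/s
```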
Hydrazine monopropellant has been identified as the best choice. It guarantees a high specific impulse of 200 s, high reliability and low complexity in system design. The ∆v requirement analysis resulted in propellant masses of 280 kg for the primary spacecraft and 16 kg for the retroreflector spacecraft. The propulsion subsystem consists of standard components from manufacturers with space heritage, including the pressurization system, pressure regulator, pyro-valves, propellant tank, flow control valves and the monopropellant thruster. In the case of the primary spacecraft, four thrusters are employed to achieve redundancy. Dry masses of the propulsion subsystems amount to 32 kg for the primary spacecraft and 2.2 kg for the retroreflector spacecraft.
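These propellant masses follow from the Tsiolkovsky rocket equation; back-solving with the quoted 280 kg and 3,000 kg dry mass implies a total ∆v budget of roughly 175 m/s for the primary spacecraft, as the sketch below illustrates (the 175 m/s, and the retroreflector dry mass and ∆v, are inferred or assumed rather than quoted in the text).

```python
import math

G0 = 9.80665  # standard gravity [m/s^2]

def propellant_mass(m_dry: float, dv: float, isp: float) -> float:
    """Tsiolkovsky: propellant needed to give m_dry a total delta-v."""
    return m_dry * (math.exp(dv / (isp * G0)) - 1.0)

print(f"primary:        {propellant_mass(3_000.0, 175.0, 200.0):6.0f} kg")
print(f"retroreflector: {propellant_mass(150.0, 200.0, 200.0):6.0f} kg")  # assumed
```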
As the primary spacecraft and the constellation of retroreflector spacecraft are in counter-rotating orbits, two launchers are required, and we propose using Soyuz for the launch of the primary spacecraft and Dnepr for the five retroreflector spacecraft. In order to distribute the constellation of retroreflector spacecraft in orbit, several options can be considered:
1. Use of a dispenser to position the constellation, such as the one used for SWARM [74]. The dispenser would distribute all the retroreflector spacecraft using its own propellant. This option offers an optimal mission lifetime but incurs the cost and design of a dispenser.
2. Use of the two-tier layout in the fairing of Dnepr, but no dispenser. In this option, three spacecraft would be placed on the first floor of the fairing, and two spacecraft on the second floor. The spacecraft would have to use their own propellant to achieve the correct distribution. This option shortens the mission lifetime but is cheaper, as it does not incur the cost of a dispenser.
3. A combination of the previous two: two dispensers, one containing three spacecraft and the other containing two, would be placed on the two floors of Dnepr, and the dispensers would then distribute the constellation. Since the dispensers would require a reduced total amount of propellant, this offers an intermediate solution: the same mission lifetime as option 1 but potentially cheaper, and a longer mission lifetime than option 2 but more expensive.
Significant Challenges
As with any new measurement technique that has not yet been demonstrated in space, substantial technological and technical development efforts and risks are associated with the mission concept described in this paper. The main challenge is likely to be the design of the transmitter and the resilience of the lasers, as evidenced by existing and planned spaceborne LiDAR missions. Aside from the obvious stress on delicate optical elements and alignment during launch, laser damage to optical coatings is an issue, and the large power consumption and heat dissipation requirements during operations are challenging, especially with respect to the cooling of optical components. The design of a high-power pulsed laser system generating the required water vapour wavelengths with extreme frequency accuracy and stability, spectral purity, output energy and temperature control requirements will require significant development and testing. Much research has already been conducted into potential optical layouts for the development of new water vapour DIAL systems in general, and of the WALES airborne platform and spaceborne proposal in particular. Since lasers are prone to failure, transmitter redundancy with limb/nadir switching capabilities would have to be implemented, with associated impacts on mass budget, dimensions and costs.
Aiming the laser beam precisely onto a target satellite that passes at a distance of about 5,000 km and at a relative speed of roughly 15 km/s constitutes a further major technical challenge. However, there have been substantial developments in pointing, acquisition and tracking (PAT) technology in the last 50 years, since the first attempt at space-to-ground laser communications during the Gemini 7 mission in 1965. In the mid 1980s, ESA embarked on the SILEX programme, demonstrating both space-to-ground laser communications between the ARTEMIS satellite and a ground-based tracking station, and inter-satellite laser communication between ARTEMIS and the Earth observation satellite SPOT-4. SILEX demonstrated for the first time that the stringent PAT requirements associated with the extremely low divergence of optical communication beams (7 µrad in the case of SILEX) can be reliably mastered in space. In 2008, TerraSAR-X and the NFIRE satellites tested data transmission in space at a range of about 5,000 km with a laser beam divergence of approximately 3 µrad [75]. Today, DSP-I satellites carry a laser communications package that enables the satellites to relay information to each other with a similar beam divergence.
Although the previous examples did not have to deal with the signal spectral quality requirements that arise for a differential absorption measurement mission, they demonstrate that the extremely demanding PAT requirements associated with optical wavelengths can be reliably mastered, and the knowledge gathered will be extremely valuable for the concept proposed in this paper.
PAT will be further complicated by the very narrow FOV of the limb telescope required to minimize background radiation. The instrument design proposed herein foresees a flat mirror of the same size as the main telescope mirror to steer the FOV, and to enable tracking in both pitch (to perform the occultation measurement) and yaw (to compensate for orbit divergence). Even if the respective angular velocities remain small, the mass of the mirror will induce a significant angular momentum which will need to be compensated for by attitude control. This ties in with the very tight requirements on accurate spacecraft positioning, orbit maintenance and attitude control, for both the primary spacecraft and all the retroreflector spacecraft, as well as the orbit insertion of the entire constellation during two successive launches.
Since the success of this novel measurement technique relies on optimally returned signals, issues associated with the design of the retroreflectors must not be neglected. In particular, incoming radiation must be sent back in exactly the same direction, with minimal reflected beam divergence and suppressed interference. More importantly, atmospheric scintillation effects and speckle may prove to significantly affect the PAT system and, worse, the measurements' SNR. Finally, this discussion would not be complete without acknowledging the challenges already faced by the water vapour backscatter LiDAR proposals that our concept largely draws from.
Concluding Remarks
This paper has proposed a novel and challenging measurement technique as a means of delivering high-quality measurements of UTLS quantities, particularly water vapour, which is a crucial atmospheric variable and poorly constrained in climate models. The key features of an observing system for UTLS water vapour are sensitivity to low concentrations and high vertical resolution, both of which an active limb sounding system has the potential to deliver. The technical challenges of such a mission are significant and have been acknowledged; the concept presented here is a first attempt to address them.
Although the results from this preliminary study are very encouraging (and may prove to be even more so after appropriate wavelength optimization and systems trade-off studies), it is difficult to make reliable predictions with respect to the final horizontal and vertical resolutions that can be achieved with the system after all sources of noise, primarily atmospheric refraction, ray bending and scintillation, have been taken into consideration. A detailed instrument end-to-end performance simulation would be required, not only to accurately model the forward propagation of the signal and receiver SNR, but also to include the performance of potential retrieval algorithms, which will determine the number of shots that have to be averaged to achieve errors and biases within the required limits. The data retrieval simulation will also need to incorporate the synergistic combination of data from both limb and nadir measurements.
In addition to the unprecedented water vapour information, the system generates collocated information on cloud properties and particles at cloud-resolving scales. This will provide valuable insights into aerosol-cloud-climate interaction processes, especially for the poorly understood ice nuclei and cirrus clouds, complementing the planned EarthCARE mission and building on the CALIPSO-CloudSat formation within the A-train. Adding to the aerosol LiDARs already in orbit, the mission's UTLS and stratospheric aerosol detection capabilities will continue to be useful for the monitoring of potential intrusions of particles into upper air layers. This would be of particular use during volcanic eruptions or the possible deliberate introduction of stratospheric aerosol for solar radiation management as a geoengineering response to climate change, should this idea ever be explored seriously.
paths due to induced Doppler shifts. Losses due to defocusing are taken into account as forward and backward refractive dilution factors φ_fwd,bwd [-], which for simplicity, and as a first-order approximation, have been derived following Dalaudier et al. [54], with the same refractive index profile as has been used for estimating the solar background (see below) and atmospheric scale heights defined by the standard temperature profile. Herein, the narrow laser and reflected beam divergences are factored in as a sort of transmitter and receiver gain from geometrical considerations, and (η_ref A_ref) is the equivalent of the target surface cross section in the classical radar equation.
Conceptually, the limb column-averaged water vapour mixing ratio can be inferred from the measurement of the forward and backward differential optical depths extracted from full pulse return signals at two wavelengths, as given here for the two-way transmission: the online wavelength (subscript 'on') is centred on a water vapour absorption feature and the offline wavelength (subscript 'off') resides in the wing, but close enough to the online frequency for attenuation by other atmospheric constituents to be essentially the same, whilst ∆ refers to the additional Doppler red-shift experienced on the backward path with respect to the forward one. The expression within the exponential corresponds to twice the optical depth, where σ_A [m²] is the water vapour absorption cross section, N [m⁻³] is the vapour's path-averaged concentration or number density, and β [m⁻¹] is the total extinction due to absorption and scattering by other atmospheric constituents.
It is then in theory straightforward to extract the average concentration from the logarithm of the ratio between the two signals, provided the absorption cross sections are known and the power ratio of the outgoing pulses is measured. If we assume that differential spectral refraction in the atmosphere and the effect of the time delay between online and offline pulses are negligible, i.e., that the optical path covered by both wavelengths, as well as by their Doppler-shifted equivalents, is essentially the same, we obtain a simplified expression for the two-way path-averaged differential optical depth for water vapour, τ_wv, which will still need to be deconvolved to take into account the limb sounding geometry. Power loss due to absorption (P_A) at the four water vapour IPDALLS wavelengths in limb sounding geometry has been computed as atmospheric transmission (T_abs) values using the MIPAS Reference Forward Model (RFM v4.28) radiative transfer algorithm [76] in conjunction with the HITRAN 2008 database [53] and the FASCODE Model 6 atmosphere (US Standard Atmosphere plus minor constituents, 19DEC99) [77].
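The retrieval step itself reduces to a logarithmic ratio; the following sketch implements the idealised two-wavelength case under the simplifying assumptions just listed, with all numerical inputs as hypothetical placeholders.

```python
import math

def n_wv(p_on: float, p_off: float, e_on: float, e_off: float,
         sigma_on: float, sigma_off: float, path_m: float) -> float:
    """Path-averaged H2O number density [m^-3] from a two-way DIAL ratio."""
    # One-way differential optical depth (half the two-way value):
    tau_wv = 0.5 * math.log((p_off / p_on) * (e_on / e_off))
    return tau_wv / ((sigma_on - sigma_off) * path_m)

n = n_wv(p_on=0.4e-12, p_off=1.0e-12,          # received powers [W], placeholders
         e_on=1.0, e_off=1.0,                  # outgoing pulse energy ratio
         sigma_on=5.0e-26, sigma_off=1.0e-27,  # cross sections [m^2], placeholders
         path_m=400e3)                         # effective limb path [m], assumed
print(f"N ≈ {n:.2e} m^-3")
```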
Power loss through attenuation by Rayleigh and Mie scattering (P_S) is less strongly wavelength-dependent than absorption. In a first approximation, we have ignored aerosol Mie scattering for the estimation of transmission losses at the sounding altitudes of interest. Atmospheric transmission (T_sca) after molecular Rayleigh scattering has been computed for the water vapour offline and the transmitter harmonic wavelengths in the 1976 US Standard Atmosphere. Rayleigh scattering cross sections have been roughly estimated using the polarizability formulation of [78],

$$\sigma_R = \frac{128\,\pi^5}{3\,\lambda^4}\,\alpha_0^2\,\frac{6 + 3\rho_n}{6 - 7\rho_n},$$

with a depolarization factor ρ_n [-] for naturally polarized light and a volume polarizability α_0 [cm³] for standard air, giving σ_R(532) = 4.937 × 10⁻³¹ m², σ_R(935) = 5.175 × 10⁻³² m² and σ_R(1064) = 3.086 × 10⁻³² m². The molecular scattering coefficients γ_s [m⁻¹] follow by multiplying the cross sections by the molecular number densities in the standard atmosphere as a function of height. The atmospheric optical thickness τ_s for limb sounding has then been calculated by integrating from the tangent height h_0 of the limb sounding to an altitude of 90 km, at which scattering is assumed to be negligible,

$$\tau_s = 2\int_{h_0}^{90\,\mathrm{km}} \gamma_s(h)\,\frac{(R_E + h)}{\sqrt{(R_E + h)^2 - (R_E + h_0)^2}}\,dh,$$

where R_E denotes the radius of the Earth [79]. More detailed radiative transfer modelling studies, beyond the scope of this preliminary analysis, would be required to take into account Mie scattering and different atmospheric conditions, as well as a more realistic limb sounding geometry, scintillation, and related effects.

Since we hypothesize that the use of optimally designed retroreflectors should limit interference and therefore target-generated speckle noise, and since we do not have a fully designed detection electronic circuit with given elements and measured statistical fluctuations, we limit the modelling of the SNR to the carrier-to-noise ratio CNR [33,80]:

$$\mathrm{SNR} \cong \mathrm{CNR} \cong \frac{P_r\,M\,R_0}{\sqrt{2\,q\,B_W\left[M^2\,F\,R_0\,(P_r + P_{bkgrd}) + I_{ds} + I_{db}\,M^2\,F\right]}}, \qquad (7)$$

where M [-] is the detector internal gain factor, R_0 [A/W] the detector's unit-gain responsivity, B_W [Hz] the amplifier bandwidth, q [C] the elementary charge, F [-] the detector's excess noise factor, P_bkgrd [W] the background radiance and I_ds,db [A] the detector's surface and bulk dark currents, respectively. In this expression, the numerator corresponds to the signal photocurrent [A], and the denominator to the total noise current in a Si avalanche photodiode (Si APD). Apart from external noise, we limited the analysis to detector noise and have disregarded contributions from the electrical amplification process, i.e., amplifier input noise current, input voltage noise current and thermal noise from the feedback resistor. Noise originating from outside the receiver is referred to as external noise (Equation (7), first term in the square brackets). External noise will mainly include signal shot noise (proportional to P_r), as well as high-frequency emission of radiation from outside sources within the receiver FOV, predominantly from auroras (not considered), and sunlight reaching the telescope, either by single and multiple scattering, or by direct irradiation (P_bkgrd).
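For illustration, the CNR of Equation (7) can be evaluated directly; the APD parameters in the sketch below are representative assumed values in the spirit of the C30954E-class devices discussed below, not the actual mission design point.

```python
import math

Q = 1.602_176_634e-19  # elementary charge [C]

def cnr(p_r: float, p_bkgrd: float, m: float = 100.0, r0: float = 0.5,
        f_excess: float = 3.0, b_w: float = 5e6,
        i_ds: float = 1e-9, i_db: float = 1e-12) -> float:
    """Equation (7): signal photocurrent over total APD noise current."""
    signal = p_r * m * r0
    noise_var = 2.0 * Q * b_w * (m**2 * f_excess * r0 * (p_r + p_bkgrd)
                                 + i_ds + i_db * m**2 * f_excess)
    return signal / math.sqrt(noise_var)

print(f"CNR ≈ {cnr(p_r=5e-10, p_bkgrd=1e-10):.1f}")  # placeholder powers [W]
```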
The scattering of sunlight depends mainly on solar elevation and azimuth, as well as measurement height. Here, a rough estimate for Rayleigh-only scattering of solar radiation into a receiver in occultation geometry is calculated after a simplified model presented in [81]. We calculated a profile of directional Rayleigh scattering coefficients γ_R(z, λ) [km⁻¹ sr⁻¹] in the 1976 US Standard Atmosphere with a MIPAS daytime standard water vapour profile, using the modified-Edlén formula for the refractive index, and under the baseline assumption of a scattering normalized phase function corresponding to a 0° solar azimuth angle relative to the occultation plane and a 60° solar zenith angle at the tangent point. The solar spectral radiant flux density at the tangent point, I_sun [W m⁻² nm⁻¹], was assumed to be independent of optical depth, and the effective scattering path length l_eff was set to 300 km. The Rayleigh-scattered solar background power P_bkgrd collected by the receiver was then estimated as

$$P_{bkgrd} = A_{rec}\,\mathrm{FOV}_{rec}^2\,B_{W,optical}\,\gamma_R(z, \lambda)\,I_{sun}\,l_{eff}, \qquad (8)$$

with nominal channel optical filter bandwidths B_W,optical of 1 nm, and where FOV_rec [mrad] describes the receiver full field-of-view.

Noise originating within the detector is referred to as internal noise (Equation (7), second term in the square brackets), and depends on the detector type. For enhanced sensitivity in the near IR and high internal gain, APDs are preferred over traditional photomultiplier tubes (PMTs) and can be operated either in their normal linear mode or in photon-counting mode. APDs furthermore have fast rise and fall times and are recommended for very high amplifier bandwidth B_W applications. In order to adequately sample the narrow return pulse without excessive smoothing at low bandwidths, we specified the bandwidth through the relationship B_W ≅ k/τ_L, with τ_L [s] quantifying the emitted laser pulse duration and with a k factor roughly between 3 and 5. For our estimation of the CNR, we have used typical APD specifications provided by Excelitas/PerkinElmer, corresponding to models that have been employed in the GLAS laser system [82] and the WALES airborne campaign [42], notably the C30954E-DTC unit, thermoelectrically cooled to −20 °C with a double-stage cooler to reduce the dark current. The excess noise factor, related to internal amplification statistics, has been estimated following [83] as

$$F = k_{eff}\,M + (1 - k_{eff})\left(2 - \frac{1}{M}\right),$$

where k_eff denotes the effective carrier ionization ratio, specified as k_eff = 0.02 for a Si high-performance reach-through structure [84]. Surface and bulk dark currents, measured at room temperature T_meas [K], have been extrapolated to their values in a cooled setup (T_oper [K]) following the Arrhenius equation, where k_B is the Boltzmann constant and where the activation energy E_a [J] was set to 0.70 and 0.55 eV for the surface and bulk dark currents, respectively. A simplified expression for the modelled random error of the two-way path-averaged water vapour optical depth can be derived from the first term on the left in Equation (4), disregarding the additional differential terms which are based on unknown system parameters. Using standard first-order error propagation and setting SNR_(on,off) ≡ P_r,(on,off)² / ⟨∆P_r,(on,off)²⟩ [33], we estimate the variance of the two-way optical depth and then the relative random error.
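The Arrhenius extrapolation of the dark currents can be sketched as follows; the room-temperature currents are assumed placeholder values, while the activation energies and temperatures follow the text.

```python
import math

K_B_EV = 8.617_333e-5  # Boltzmann constant [eV/K]

def cooled_dark_current(i_room: float, e_a_ev: float,
                        t_meas: float = 295.0, t_oper: float = 253.0) -> float:
    """Arrhenius scaling of a dark current from T_meas to T_oper."""
    return i_room * math.exp((e_a_ev / K_B_EV) * (1.0 / t_meas - 1.0 / t_oper))

i_ds = cooled_dark_current(60e-9, 0.70)   # surface component, assumed 60 nA at room T
i_db = cooled_dark_current(0.2e-9, 0.55)  # bulk component, assumed 0.2 nA at room T
print(f"I_ds ≈ {i_ds:.2e} A, I_db ≈ {i_db:.2e} A at -20 °C")
```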
Figure 2. Sequence of one set of nadir and limb measurements, with the time for each part of the sequence shown in seconds. The observed region is shown hatched. The outermost orbit is that of the primary spacecraft (PSC; 582 km altitude, shown traveling clockwise), with the orbit of the retroreflector spacecraft just inside (RSC; 550 km altitude, traveling anticlockwise). At time t0, the primary spacecraft begins making nadir measurements. At time t1 the nadir measurement ceases, and the primary spacecraft searches for and locks onto the retroreflector. Limb measurements begin at t2 with calibration, followed by the collection of scientific data from t3 to t4. There is then some free time before the start of the next measurement sequence as the next retroreflector comes into view (shown in grey). Ray bending due to atmospheric refraction is unaccounted for.
Figure 3(a) shows the simulated coverage for one day of measurements with five retroreflectors, with the measurement tracks coloured according to time of day (UTC). Observations are global, with the largest orbit step or measurement gap of about 2,500 km around the equator, and the highest sampling density over the poles (although a certain proportion of these will be lost when the instrument is pointing directly at the Sun). Figure 3(b) shows the coverage over 10 days, and Figure 3(c) over 21 days of operations.
Figure 3. Ground tracks of collocated nadir and limb measurements from a configuration using five retroreflectors, coloured by time of day (UTC), for (a) one day, (b) ten days, (c) 21 days and (d) 21 days with only four retroreflectors.
Figure 5. Water vapour molecular absorption cross sections σ as a function of vacuum wavelength λ using the HITRAN 2008 database [53] for sea level (solid line) and an altitude of 20 km (dashed line) according to US Standard Atmosphere conditions. Wavelengths used in this study are indicated by vertical thick lines; the additional thin lines represent the Doppler-shifted wavelengths on the backward path. Water vapour absorption of the first and second harmonics generated by the transmitter is negligible.
Figure 6. Outcome of preliminary performance assessment for IPDALLS. Altitude is given for the tangent point of the limb sounding optical path, and values below 5 km have not been computed. (a) Atmospheric transmission due to absorption on the forward path (thick lines) and on the backward path (thin lines). The shaded area shows the range over which the relative error approximately doubles with respect to its minimum at a transmission of 0.33. (b) Atmospheric transmission calculated for Rayleigh scattering alone (black lines) and geometric mean of the forward and backward refractive dilution factors for the offline wavelength (magenta). (c) Radiative signal pulse energy returned to the receiver and incident upon the detector.
Figure 8. Conceptual layout of the main payload. Emitted beams are in red; the optical path of incoming radiation is in blue. The individual elements (US, BS, S, D, GM, FPM, FS, AT, PBS, P) are described in the main text. For further details on PAT system layouts for optical communication, we refer to [70].
Figure 11. (a) Required yaw pointing when the primary spacecraft (PSC) and retroreflector spacecraft (RSC) are in different orbital planes, for the exaggerated case of a 10 degree inclination difference for illustration (the specified tolerance in reality is 0.01 degrees). The PSC is on a prograde orbit with 80 degrees inclination (red); the RSC are on a retrograde polar orbit with 270 degrees inclination (blue). The red and blue lines represent the respective normal vectors. The thick red dot shows the tangent point, purple lines are sounding paths in the ideal configuration of a single orbital plane, and black lines represent real occultations. (b) Required yaw tracking during a typical measurement sequence for tangent points at different positions around an orbit (colours), and orbital planes characterized by a 0.01 degree divergence in RAAN. The shaded area indicates the field-of-view of the limb telescope within which no yaw pointing is required during the occultation.
$$I_{ds,b}(T_{oper}) = I_{ds,b}(T_{meas})\,\exp\!\left[\frac{E_{a\,s,b}}{k_B}\left(\frac{1}{T_{meas}} - \frac{1}{T_{oper}}\right)\right]$$
Table 1. Specification of instrument components used in the preliminary performance calculations.
Table 2. Mass and power budgets for the primary and the retroreflector spacecraft.
"Environmental Science",
"Physics"
] |
Temporal Dynamics and Predictive Modelling of Streamflow and Water Quality Using Advanced Statistical and Ensemble Machine Learning Techniques
Abstract: Changes in water quality are closely linked to seasonal fluctuations in streamflow, and a thorough understanding of how these variations interact across different time scales is important for the efficient management of surface water bodies such as rivers, lakes, and reservoirs. The aim of this study is to explore the potential connection between streamflow, rainfall, and water quality and to propose an optimised ensemble model for the prediction of a water quality index (WQI). This study modelled the changes in five water quality parameters, ammonia nitrogen (NH₃-N), phosphate (PO₄³⁻), pH, turbidity, and total dissolved solids (TDS), and their associated WQI caused by rainfall and streamflow. The analysis was conducted across three temporal scales, weekly, monthly, and seasonal, using a generalised additive model (GAM) in Toowoomba, Australia. TDS, turbidity, and WQI exhibited significant nonlinear variation with changes in streamflow at the weekly and monthly scales. Additionally, pH demonstrated a significant linear to weakly linear correlation with discharge across the three temporal scales. For the accurate prediction of the WQI, this study proposed an ensemble model integrating extreme gradient boosting (XGBoost) and a Bayesian optimisation (BO) algorithm, using streamflow as an input across the same temporal scales. The results for the three temporal scales showed the best accuracy for monthly data, based on the accuracy metrics R² (0.91), MAE (0.20), and RMSE (0.42). The comparison between the test and predicted data indicated that the prediction model overestimated the WQI at some points. This study highlights the efficiency of integrating rainfall, streamflow, and water quality correlations for WQI prediction, which can provide valuable insights for guiding future water management strategies in similar catchment areas, especially amidst changing climatic conditions.
Introduction
The pollution of rivers and streams resulting from both point and non-point sources is increasing due to the emerging influence of extreme rainfall events and their associated streamflow [1]. Surface water quality and ecosystem health are influenced by a complex interplay between factors such as climate variability, hydrological processes, geochemical cycles, and human activities [2-4]. The rapid growth of population demands increased food production, which consequently disturbs the natural land cover. The increased use of chemical fertilisers causes pollution as they are transported to surface and groundwater systems during extreme rainfall events [5]. The change in surface runoff under changing rainfall patterns induces variations in pollutant transfer to water bodies [6,7]. Rainfall, associated streamflow, and stream water quality are intimately linked; however, these three aspects are often analysed separately, and a comprehensive assessment of their combined influence on stream water quality has not been explored much [2,8]. Stream water quality encompasses three essential aspects of freshwater: its physical state (whether it is frozen or not), its temperature, and the concentration of constituents. These factors significantly influence the major processes that regulate stream water quality, such as transport, exchange, storage, and the decomposition of organic matter [9-11]. The riverine ecosystem related to water quality is subjected to various stresses due to these processes, and streamflow patterns significantly interact with the physical and chemical composition and the state of the water [2,12]. However, there have been limited studies of how changes in streamflow patterns, influenced by varying rainfall magnitudes, interact with constituent concentrations in water bodies over different time scales [13].
There are several studies related to process-based hydrological and water quality simulation models, where streamflow variability was considered as a predictor of water quality [14]. These include the MIKE 21 and MIKE 31 models [15,16], the QUAL models [17], the QUAL2K model [18], the QUASAR model [19,20], the SWAT model [21], and the IISDHM [22]. These models have provided an improved understanding of how water quality varies with streamflow variability. However, the simulation accuracy of these models is affected by spatial variations arising from hydrometeorological variability within the catchment scale. Additionally, numerous studies have demonstrated a correlation between land use change and the concentration of water constituents such as dissolved oxygen, total dissolved solids, and nutrients. Specifically, these constituents were found to be higher in agricultural watersheds compared to forests [23]. However, in addition to land use change, stream water quality is also influenced by geology, topography, soil characteristics, and climate variability [2].
Recently, the prediction of a water quality index (WQI) has been advanced through artificial intelligence techniques such as artificial neural networks (ANN), support vector machines (SVM), and the adaptive neuro-fuzzy inference system (ANFIS) [24-26]. Among them, ANN exhibits poor prediction accuracy if the range of the testing data exceeds the range of the training data. Whilst SVM provides high accuracy, it requires determining the optimum values of a large number of parameters. ANFIS, in turn, is a robust algorithm which combines ANN and fuzzy logic for modelling nonlinear, complex, and dynamic systems [27]. Despite its potential, it is computationally complex, and its accuracy is compromised by internal parameters which require precise weight assignment in the fuzzy rule membership [28]. On the other hand, hybrid models can effectively recognise the nonlinearity of input and output parameters, demonstrating enhanced robustness against data fluctuations [28].
Extreme gradient boosting (XGBoost) is recognised as a robust ensemble learning algorithm known for its effectiveness in data mining and regression tasks [29]. It stands out for its speed, robustness, and ability to deliver precise predictions, as demonstrated by its performance in major data competitions such as Kaggle and Data Castle [30]. It has been widely applied in several fields, including predicting concrete electrical resistivity for structural health monitoring and accurately mapping steel properties [31,32]. However, its utilisation in predicting streamflow and water quality is limited. We aim to showcase its potential use in predicting the WQI.
Convolutional neural networks (CNN) are a prevalent topic in deep learning (DL) research and have proven effective in computer vision (CV), computer-aided diagnosis (CAD), natural language processing (NLP), and pattern recognition tasks [33]. Hyperparameters are crucial parameters to set before model training, governing the learning process and influencing performance. An efficient hyperparameter optimisation algorithm can significantly enhance model performance and accuracy [34-36]. The most widely used hyperparameter optimisation method is grid search, which operates on the principle of exhaustive searching. However, it is limited by the high computational cost associated with exhaustive searching [34]. To mitigate this issue, the random search algorithm was introduced; however, previous studies have found it to be unreliable for training complex models [36]. Recently, Bayesian optimisation (BO) has emerged as a highly effective algorithm for addressing machine learning optimisation problems. The optimisation of artificial neural networks (ANNs), support vector machines (SVMs), and other models can be effectively carried out using this algorithm [30,37]. There have been studies on the implementation of XGBoost in hydrology [38]; however, there has been limited investigation of its application to water quality prediction. Moreover, the estimation and optimisation of hyperparameters is one of the most important steps in the XGBoost model, and applying a BO algorithm can improve the prediction accuracy [39].
In this study, we aimed to investigate the impact of rainfall and streamflow on water quality parameters and the associated WQI by applying models using advanced statistical and ensemble machine learning techniques to describe this relationship. In our previous study, five water quality parameters (NH₃-N, PO₄³⁻, pH, turbidity, and total dissolved solids) were selected to compute the WQI, and we applied five machine learning and two deep learning algorithms to predict the WQI [40]. In addition, the trends of rainfall and water quality parameters and the correlation between rainfall and water quality were examined. Building on our previous study, this research proposes a new statistical technique, the generalised additive model (GAM), to identify the correlation between rainfall, streamflow, and water quality parameters. Further, an ensemble learning algorithm (BO-XGBoost) was used to predict the WQI across three temporal scales. The Toowoomba region of Australia contains three major reservoirs, namely Cooby, Cressbrook, and Perseverance [40]. The Cressbrook Reservoir, one of the three major reservoirs for town water supply in the region, was selected as the representative case study area because of the availability of streamflow data. The main focuses of this study are as follows:
• Simulate changes in water quality parameters and the associated WQI with variations in rainfall and streamflow across three temporal scales: weekly, monthly, and seasonal.
• Propose a novel approach combining XGBoost with a Bayesian optimisation (BO) algorithm to predict the WQI, considering the influence of streamflow on the same three temporal scales. XGBoost was applied to establish the relation between the streamflow and water quality data, and the BO algorithm was used to optimise the XGBoost hyperparameters to improve the accuracy of the prediction model.
Study Area
Cressbrook Reservoir is located approximately 80 km northwest of Brisbane and 55 km northeast of Toowoomba in South East Queensland, Australia (Figure 1). It is situated at an elevation of 280 m Australian Height Datum (AHD) [41] and flows to the Brisbane River [42]. The upper Cressbrook Creek comprises Rocky Creek, Bald Hills, Old Woman's Hut, and Crows Nest, while the lower subcatchment includes Cressbrook Reservoir and the major tributaries of Kipper and Oakey Creeks [43]. The geological features in the area are diverse and characterised by a basalt covering along the mountain range. Rainfall in the upper subcatchment is relatively high compared to the middle (above the weir and Kipper Creek) and lower catchment (below the weir). The influence of the upstream dams and the lower Cressbrook weirs has substantially altered the water flow, leading to a decrease in the number of waterholes. Land use in the upper catchment includes grazing on native vegetation, animal husbandry, national park, vegetation, and irrigated perennial horticulture. In contrast, the lower catchment area is utilised for forestry, grazing on native vegetation, animal husbandry, rural residential areas, and the Toogoolawah sewage treatment plant. The water quality in this area is impacted by salinity resulting from geology and modified water flows, as well as erosion and small scalds [43]. The major challenges faced by the Cressbrook catchment include the removal and degradation of riparian vegetation, cattle grazing, deforestation, agricultural activities, and residential development [44].
Data
The relationship between water quality and local rainfall and stream discharge was analysed using daily maximum discharge data from site number 143921A (Cressbrook Creek at Rosentretars Crossing; −27.1361°, 152.33°) [45] and daily rainfall data from the Cressbrook Reservoir weather station number 040808 (−27.2641°, 152.1959°) [46]. The data were obtained from the Queensland Water Monitoring Portal (https://water-monitoring.information.qld.gov.au/, accessed on 6 March 2024) and the Bureau of Meteorology website (https://www.bom.gov.au/, accessed on 7 March 2024). The study covered the period from 2000 to 2022 due to the availability of water quality data. Weekly water quality data were provided by the Toowoomba Regional Council (TRC), which is responsible for maintaining three water supply reservoirs in the Toowoomba region. The rationale for selecting five water quality parameters (PO₄³⁻, NH₃-N, pH, TDS, and turbidity) and the computation of the WQI were explained in the previous study [40].
Mathematical Background
Computation of Variation in Water Quality Parameters Using GAM
The variation in the selected water quality parameters in response to changes in rainfall and streamflow was predicted using a generalised additive model (GAM). The GAM integrates elements of both generalised linear models and additive models. It employs an additive link function to define the relationship between the response variable and the nonparametric predictor variables [47]. By applying multiple linear smooth functions, the GAM effectively addresses data nonlinearity, prevents overfitting, and obviates the requirement of prior knowledge of specific predictive function forms [48].
The GAM model, which captures the variation in the response variable using the independent variables through smooth functions, can be expressed as follows:

$$g(E(Y)) = \beta_0 + f_1(x_1) + f_2(x_2) + \dots + f_p(x_p), \qquad (1)$$

where Y is the response variable, and E(Y) is its expected value. The response distribution is not required to be normal; instead, the observations are assumed to be drawn from a member of the exponential family of distributions, specifically Y ~ EF(µ, φ), where φ is the scale parameter and µ is the mean. Similarly, g is the smooth monotonic link function (uniform, logarithmic, or transpose) that maps the mean of the distribution function to the linear predictor scale [9]. β₀ is the model intercept, and f₁, f₂, ..., f_p are the smooth functions of the control variables x₁, x₂, ..., x_p.
Smooth functions provide a flexible nonlinear representation of the effect of covariates on the response variable by determining an appropriate basis which defines the function space to which f belongs. The components of this space are called basis functions. For instance, in the case of a second-order polynomial, the basis functions would be 1, x, and x², forming the set {1, x, x²}. The sum of the basis functions weighted by their corresponding regression coefficients β forms the smooth function f, which can be written as

$$f(x) = \sum_{i=1}^{k} b_i(x)\,\beta_i, \qquad (2)$$

where k denotes the fundamental dimension of the smooth function and b_i are the basis functions. Each basis function constitutes a column in the model matrix, allowing every smooth function f to be written in the general matrix form f_i(x_i) = X_i β, where X_i is the model matrix [9]. Among the various types of smooth functions, a cubic spline smooth function was applied in this study.
The GAM equation in Equation ( 1) was modified in this study to illustrate the variation in water quality.Water quality parameters were predicted as an additive function of two covariates, rainfall and streamflow, and Equation ( 1) was reformed as follows.
Here, Y is the predicted concentration of the water quality parameters, and fr and fq are the smooth functions of rainfall and streamflow, respectively. The water quality was modelled and forecasted on three temporal scales: weekly, monthly, and seasonal. A cubic spline smooth function was applied in this study, which is a smooth curve composed of segments of cubic polynomials, seamlessly connected so that the entire spline maintains continuity in both its value and its first two derivatives [9]. To fit Equation (3) for the interpretation of changes to water quality parameters, the 'mgcv' package of R software (4.3.2) was used. A generalised cross-validation (GCV) score was considered to check whether the model overfitted the data, and a p value was used to evaluate the model's smoothness.
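As a minimal illustration (not the authors' actual script), a model of the form of Equation (3) can be fitted with the 'mgcv' package using cubic regression splines; the synthetic data frame wq and its columns are assumptions standing in for the monitoring series:

library(mgcv)

# Hypothetical weekly series standing in for the monitoring data
set.seed(1)
wq <- data.frame(rain = rexp(500, rate = 1/14),  # weekly rainfall, mm
                 flow = rexp(500, rate = 1))     # weekly streamflow, m3/s
wq$turbidity <- 2 + 0.05 * wq$rain + sqrt(wq$flow) + rnorm(500, sd = 0.5)

# One GAM per water quality parameter; bs = "cr" requests a cubic regression spline
fit <- gam(turbidity ~ s(rain, bs = "cr") + s(flow, bs = "cr"),
           data = wq, family = gaussian(), method = "GCV.Cp")

summary(fit)   # reports the edf and p-value of each smooth term
fit$gcv.ubre   # GCV score: smaller values indicate less risk of overfitting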
Prediction of WQI
In this study, an XGBoost algorithm with Bayesian optimisation was applied to predict the water quality index (WQI) using discharge data. This approach aimed to enhance model accuracy and reliability by fine-tuning the hyperparameters through a systematic optimisation process. A detailed description of the algorithms is provided in the following sections.
Extreme Gradient Boosting
Extreme gradient boosting (XGBoost) is a popular and powerful machine learning model leveraging a scalable end-to-end tree boosting system capable of capturing complex nonlinear relationships between a set of predictor variables and an output variable. XGBoost is distinguished by two fundamental optimisation improvements over the gradient-boosting decision tree (GBDT) algorithm. Firstly, regularisation terms are incorporated into XGBoost's objective function, which helps to mitigate overfitting. Secondly, XGBoost utilises a second-order Taylor expansion of the target function, enhancing the precision of its loss function [30].
If a dataset with n samples is denoted as A = {(xi, yi)}, i = 1, 2, ..., n, where xi represents the input variables and yi the response variable, a tree ensemble based on the general form of classification-and-regression-tree algorithms generates the output

ŷi = Σ_{m=1}^{M} fm(xi), fm ∈ W,   (4)

where M is the number of trees trained, W is the space of regression trees (W = {f(x) = µ_l(x)}), and f(x) represents a single tree structure. Similarly, µ represents the leaf weights, and l(x) denotes the leaf node to which the sample x is assigned. The objective function in XGBoost incorporates both a regularisation term and a loss function, as explained by Chen and Guestrin [49]:

Obj = Σ_{i=1}^{n} L(yi, ŷi) + Σ_{m=1}^{M} Ω(fm),   (5)

where L is the loss function, and Ω is the regularisation term, which penalises the objective function according to the complexity of the model:

Ω(f) = γT + (1/2) λ ||µ||²,   (6)

where T denotes the number of leaves in a tree, and γ denotes the penalty coefficient. The objective function in Equation (5) can be simplified using a second-order Taylor series expansion as follows [50]:

Obj^(t) ≈ Σ_{i=1}^{n} [gi ft(xi) + (1/2) hi ft²(xi)] + Ω(ft),   (7)

where the first-order and second-order gradient statistics gi and hi can be expressed as [30]

gi = ∂L(yi, ŷi^(t−1))/∂ŷi^(t−1),  hi = ∂²L(yi, ŷi^(t−1))/∂(ŷi^(t−1))²,   (8)

so that the final form of the objective function can be derived as [30]

Obj = −(1/2) Σ_{j=1}^{T} (Σ_{i∈Ij} gi)² / (Σ_{i∈Ij} hi + λ) + γT,   (9)

where Ij is the set of samples assigned to leaf j.

Bayesian Optimisation

Hyperparameters directly influence the behaviour of training algorithms and exert a significant impact on machine learning models. Efficient optimisation of the hyperparameters is important for enhancing the efficiency of machine learning models [34]. Bayesian optimisation provides an effective way to optimise computationally expensive functions by identifying optimal points. It integrates prior information about the objective function with sampled points to gather updated information about the function's distribution using the Bayes formula. By applying this updated information, global optimum values can be assessed [34]. Bayesian optimisation involves two major steps: first, the selection of a surrogate model, typically a Gaussian process, to incorporate prior information about the objective function; and second, the choice of an acquisition function to propose sampling points in the search space [51].
Establishment of the Prediction Model
To predict the WQI, we introduced an XGBoost-BO model, which involved the following steps; the methodological flow chart is illustrated in Figure 2, and a condensed R sketch of steps (i)-(v) is given after the list.
(i) Data preparation and processing: The data were preprocessed by selecting the relevant features, specifically 'Discharge' and 'WQI'. The daily time series of discharge data was converted to weekly, monthly, and seasonal values, while the weekly WQI data were aggregated into monthly and seasonal values to facilitate prediction across three temporal scales. The selected features were then divided into training and testing sets, where 70% of the data were allocated for training and 30% for testing. This division ensures that the model's performance can be effectively evaluated on unseen data.
(ii) Definition of the objective function for Bayesian optimisation: An objective function was defined for the Bayesian optimisation (BO) to minimise the mean absolute error (MAE) of the model predictions. The primary goal was to enhance the model's accuracy for predicting the WQI by minimising the MAE.
(iii) Specification of hyperparameter bounds: Defining the hyperparameter bounds is fundamental for effectively optimising a machine learning model. The common hyperparameters are the learning rate, the maximum depth of trees, the subsample ratio of training instances, the column subsample ratio for constructing each tree, and regularisation parameters such as gamma and lambda. The search space for the hyperparameters of the XGBoost model was specified as follows: learning rate (0.001, 0.2), maximum depth of trees (3, 10), subsample ratio of training instances (0.8, 1.0), column subsample ratio for constructing each tree (0.3, 1.0), gamma (0, 0.3), and lambda (0, 1.0).
(iv) Implementation of Bayesian optimisation: In the optimisation process, the BO algorithm was employed, which balances exploration (trying new parameter configurations) and exploitation (using known configurations that are likely to yield low objective values) to minimise the objective function. It iteratively updates the probabilistic model based on observed outcomes, using a Gaussian process to suggest new configurations for evaluation, aiming to find the optimal set of hyperparameters. In the proposed model, the acquisition function was based on a Gaussian process. The number of initial points was set to 10, and the optimisation process continued for 100 consecutive iterations.
(v) Training the XGBoost model and evaluation:
Using the optimal hyperparameters identified through Bayesian optimisation, an XGBoost model was trained on the training dataset. The trained XGBoost model was then evaluated on a separate test dataset. Three performance metrics, namely the coefficient of determination (R²), the mean absolute error (MAE), and the root mean square error (RMSE), were computed. These metrics provided a comprehensive evaluation of the model's accuracy and reliability.
If y1, y2, ..., yn are the observed values and ŷ1, ŷ2, ..., ŷn are the predicted values, with ȳ representing the mean of the yi, the three metrics can be calculated as

R² = 1 − Σi (yi − ŷi)² / Σi (yi − ȳ)²,
MAE = (1/n) Σi |yi − ŷi|,
RMSE = √[(1/n) Σi (yi − ŷi)²].

The coefficient of determination R² measures the proportion of variance in the dependent variable that is predicted from the independent variable. The MAE measures the average magnitude of the errors in a set of predictions, and the RMSE quantifies the square root of the average of the squared differences between the predicted and actual observations.
(vi) Analysis of predicted results: Firstly, to demonstrate the appropriate fitting of the proposed model, line plots of the actual and predicted data on the training and test sets were generated. This was done to indicate the suitability of the proposed model for WQI prediction on different time scales. In addition, to identify patterns in model performance, a comparative analysis of the metric values across the three time scales was performed. The results from this analysis were then interpreted to determine the overall efficiency and reliability of the model in forecasting the WQI, offering insights into its applicability for practical water quality management.
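The condensed R sketch below illustrates steps (i)-(v). It is only a sketch under assumptions: the synthetic data, the choice of the 'xgboost' and 'rBayesianOptimization' packages, and all variable names are ours, since the paper does not report its implementation.

library(xgboost)
library(rBayesianOptimization)

# (i) Hypothetical weekly series: discharge as the single predictor of WQI
set.seed(42)
n   <- 800
dat <- data.frame(discharge = rexp(n, rate = 1))
dat$wqi <- 8 + log1p(dat$discharge) + rnorm(n, sd = 0.8)
idx    <- sample(n, size = 0.7 * n)   # 70/30 train/test split
dtrain <- xgb.DMatrix(as.matrix(dat[idx, "discharge", drop = FALSE]),
                      label = dat$wqi[idx])
x_test <- as.matrix(dat[-idx, "discharge", drop = FALSE])
y_test <- dat$wqi[-idx]

# (ii) Objective for BO: rBayesianOptimization maximises, so negate the MAE
xgb_mae <- function(eta, max_depth, subsample, colsample_bytree, gamma, lambda) {
  cv <- xgb.cv(params = list(objective = "reg:squarederror", eta = eta,
                             max_depth = as.integer(max_depth),
                             subsample = subsample,
                             colsample_bytree = colsample_bytree,
                             gamma = gamma, lambda = lambda),
               data = dtrain, nrounds = 200, nfold = 5,
               metrics = "mae", verbose = 0)
  list(Score = -min(cv$evaluation_log$test_mae_mean), Pred = 0)
}

# (iii)-(iv) Search space from the text, 10 initial points, 100 iterations
opt <- BayesianOptimization(xgb_mae,
        bounds = list(eta = c(0.001, 0.2), max_depth = c(3L, 10L),
                      subsample = c(0.8, 1.0), colsample_bytree = c(0.3, 1.0),
                      gamma = c(0, 0.3), lambda = c(0, 1.0)),
        init_points = 10, n_iter = 100, verbose = FALSE)

# (v) Refit with the best hyperparameters and evaluate on the held-out test set
best  <- as.list(opt$Best_Par)
model <- xgb.train(params = c(list(objective = "reg:squarederror"), best),
                   data = dtrain, nrounds = 200)
pred  <- predict(model, x_test)
c(R2   = 1 - sum((y_test - pred)^2) / sum((y_test - mean(y_test))^2),
  MAE  = mean(abs(y_test - pred)),
  RMSE = sqrt(mean((y_test - pred)^2)))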
Descriptive Statistics
Table 1 presents a comprehensive summary of the key hydrological and water quality parameters across three temporal scales spanning 22 years (2000-2022). The WQI is a mathematical expression that transforms large and complex datasets into a single quantitative value representing the overall water quality [52]. Based on the values of the water quality indicators and the assigned weightage, the WQI was calculated in our previous work related to this study. In the computation of the WQI, the maximum weightage of 5 (on a scale of 1-5) was assigned to NH3-N and PO4³−; TDS and pH were given a weightage of 4, and the lowest weightage was assigned to turbidity [40]. The weights assigned to the five selected water quality parameters were based on different authorised standards and on their potential impact on surface water pollution [40,53,54].
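Purely as an illustration of this weighted-aggregation idea (the exact sub-index formulas are given in the previous study [40] and are not reproduced here), a WQI of this form could be computed as follows; the turbidity weight of 2 and the sub-index values q are assumptions:

w <- c(NH3N = 5, PO4 = 5, pH = 4, TDS = 4, turbidity = 2)    # turbidity weight assumed
q <- c(NH3N = 6, PO4 = 9, pH = 8, TDS = 10, turbidity = 12)  # hypothetical sub-indices
sum(w * q) / sum(w)   # weighted arithmetic aggregation into a single WQI value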
Weekly rainfall averaged 14.41 mm, ranging widely from 0 to 457 mm, while weekly streamflow had a mean of 1.08 m³/s, with a maximum value of 270.98 m³/s, indicating substantial variability. The monthly data exhibited a higher average rainfall (64.22 mm) and the same mean streamflow (1.08 m³/s), although the maximum value was notably lower compared to the weekly extremes. The seasonal patterns revealed that spring and summer recorded the highest average rainfall (175.50 mm and 338.80 mm, respectively) and streamflow (0.24 m³/s and 2.81 m³/s), while the winter value was lower (0.19 m³/s). The pH of the water remained consistent at an average of 7.86 across both the weekly and monthly scales. The maximum pH was observed in the weekly data (8.90), indicating an increase in alkalinity. The average values of PO4³− and NH3-N ranged from 0.01 to 0.02 and from 0.02 to 0.04, respectively, across the three temporal scales. The turbidity was at its maximum on the monthly scale (10.45), when rainfall was 477.60 mm. Moreover, in the weekly data the TDS was at its maximum (325), and the WQI had a mean of 8.09, with values ranging from 4.03 to 12.38; the range was slightly narrower in the monthly data, from 5.49 to 11.17. The mean value of the WQI was 8.1, with values ranging from a minimum of 5.86 to a maximum of 10.58 across the four seasons. The seasonal statistics indicated that the WQI was generally consistent over the different seasons. Overall, these descriptive statistics illustrate significant variations across the different temporal scales, providing insights into the influence of rainfall volume and streamflow on the water quality parameters.
Variation in Water Quality Indicators
This study did not explore and compare all possible combinations of predictors; rather, it focused on using the GAM methodology to observe the influence of rainfall and streamflow on the variation in the water quality parameters. Water quality parameters often exhibit profound seasonality along with temporal changes. Further, it is also recognised that, within a watershed, the primary source of water (such as baseflow versus surface runoff) and its subsequent impact on water quality can fluctuate over time [9]. Therefore, this study examined these dynamics across three temporal scales, namely weekly, monthly, and seasonal, to observe the responses of the water quality indicators. The response of each parameter was measured individually using the GAM. The model was applied to the raw data, without altering the actual values, to observe the true impact. The results derived from the GAM analysis are presented in Table 2. The model intercept, which represents the baseline level assuming no influence from the predictor variables, was found to be significant (p < 0.001) across all temporal scales.
The GCV score in the 'mgcv' package in R can be taken into consideration for selecting the appropriate level of smoothness and serves as an estimator of the prediction error, with smaller values indicating a better fit of the model [55]. The GCV scores of the developed GAM models varied across the different temporal scales and water quality parameters, and the average values were 0.67, 0.59, 0.48, 0.30, 0.50, and 0.38 on the weekly, monthly, and four seasonal scales (autumn, winter, spring, and summer), respectively. These values indicated that the model fitted the data with appropriate smoothing and minimised the risk of overfitting. On the weekly scale, the lowest GCV score was observed for the PO4³− model, which was found to be better compared to the turbidity (1.94) and WQI (1.77) models. The values on the monthly scale also achieved a good balance in capturing the underlying patterns in the data, where the PO4³− and NH3-N models exhibited the lowest values (close to zero), and the turbidity (1.65) and WQI (1.68) models had the highest values. The values on the seasonal scales indicated that the model's performance might vary due to seasonal changes.
The smooth functions of each covariate were found to be most significant in the weekly data. The effective degrees of freedom (edf) for rainfall and streamflow are presented in the last two columns (five and six) of Table 2, and the significance of each edf is almost consistent for pH, turbidity, and TDS. The edf is a summary statistic of GAMs which represents the degree of nonlinearity of a smooth function: an edf of 1 corresponds to a linear relationship, an edf greater than 1 but less than or equal to 2 signifies a weakly nonlinear relationship, and an edf greater than 2 indicates a highly nonlinear relationship. Based on the edf values, highly nonlinear relationships were observed between streamflow and turbidity, TDS, and WQI. However, a significant, close-to-linear correlation was observed between pH and streamflow, particularly in the weekly and autumn seasonal data, where the edf values ranged from 1 to 2.86.
This finding strongly agrees with the results of previous studies, where pH was found to increase significantly in surface waters adjacent to agricultural lands compared to urban and natural landscapes, as well as near paved and unpaved forest roads [56,57]. In the case of PO4³−, there was a nonsignificant linear relationship with discharge and rainfall; however, during the winter season, a significant nonlinear (edf = 5.28) relationship was observed. Moreover, NH3-N showed a nonsignificant linear relationship on the weekly and monthly scales, as well as in the winter and spring seasons, but exhibited significant nonlinear variations during the summer (edf = 5.15) and autumn (edf = 8.79) seasons. Finally, the WQI showed a linear relationship in three seasons (autumn, spring, and summer) and a weakly linear relationship in the winter season.
On both the weekly and monthly scales, the WQI showed a significant nonlinear variation. Short-term variations often reflect the impact of storms or dry spells, leading to substantial, albeit temporary, changes, while the seasonal variability underscored the influence of prolonged climatic conditions on the water quality parameters and the associated WQI. From the analysis of the weekly and monthly outputs, it was observed that the significant nonlinearity of pH, TDS, and turbidity collectively caused the significant nonlinearity of the WQI, whereas NH3-N and PO4³− exerted a minimal influence. The notable response of pH, TDS, and turbidity to streamflow played a significant role in influencing the WQI, highlighting the importance of monitoring these parameters during heavy rainfall. Stormwater runoff increases turbidity levels in both surface and subsurface flows, which sometimes exceed the upper limits recommended in water quality guidelines. High TDS levels in surface water are often the result of agricultural runoff, unsustainable farming practices, uncontrolled animal grazing, and wildlife influences. Tourist destinations near water sources, such as recreational parks, can also gradually contribute to an increase in the concentration of dissolved solids over time [58-60]. This analysis provides insights into the dynamic relationships between rainfall, streamflow, and the various water quality parameters across different temporal scales.
Performance Analysis of the WQI Prediction Model
In this study, six models were developed to predict the WQI using streamflow as input data on three temporal scales (weekly, monthly, and seasonal). Among the six prediction models, the XGBoost-BO optimised model provided the best accuracy on the monthly aggregated data. The plots of the observed and predicted data during both the training and testing phases of the proposed model are summarised in Figures 3 and 4.
Upon examining the comparison plots of the different temporal scales, it was observed that during the training phase (Figure 3) there was a strong agreement between the actual and predicted values, particularly notable in the weekly and monthly data, where the lines closely aligned. The seasonal data (autumn, winter, spring, and summer) also showed a satisfactory match, although slight deviations could be noted. In the subsequent set of plots (Figure 4), during the testing phase, the weekly and monthly data showed a noticeable discrepancy, with the predicted values often either underestimating or overestimating the observed values.
As described in the methodology section, Bayesian optimisation was applied to optimise the parameters of the XGBoost regression model developed to predict the WQI. BO only optimises the machine learning model during the training phase, and hyperparameter optimisation can vary with different datasets. The optimum hyperparameters identified by BO are those which perform best in the cross-validation of the training data, without certainty that they perform best on the testing data [61]. The prediction performance of the model was assessed using three accuracy metrics: R², RMSE, and MAE. Table 3 shows the values of these accuracy metrics for both the training and testing phases of the proposed models.

Table 3 and Figure 5 represent a comprehensive overview of the performance metrics for the proposed WQI prediction model across different time scales during both the training and testing phases. The R² values indicate a strong correlation between the observed and predicted WQI values during the training phase, ranging from 0.75 to 0.96 across the various time periods. However, the testing phase shows a moderate decline, with R² values ranging from 0.52 to 0.70, highlighting a slight reduction in model accuracy.

The mean absolute error (MAE) values further elucidate the model's performance: the lower MAE values during the training phase (ranging from 0.08 to 0.58) suggest a high level of accuracy in the model's predictions. In contrast, higher MAE values were observed during the testing phase, particularly for the monthly data (1.44), indicating greater discrepancies between the observed and predicted values during this period.

The root mean squared error (RMSE) values reinforce these findings, with the training phase showcasing relatively low RMSE values (ranging from 0.22 to 0.61), reflecting the model's robust performance. Conversely, the testing phase exhibits higher RMSE values, especially for the winter and autumn periods (1.86 and 1.62, respectively).
Discussion
The GAM applies an additive link function to quantify the relationship between the response variable and the nonparametric predictor variables [47]. In this study, each water quality parameter was considered a response variable, while rainfall and streamflow were the predictor variables. The application of the generalised additive model (GAM) proved to be an effective and insightful way of characterising the complex, nonlinear relationships between the individual water quality parameters and the hydrological variables in our study. We visualised these relationships and quantified their significance, aligning with previous studies that correlated streamflow metrics, rainfall, and flow diversion with water quality variations [13]. This evaluation facilitates straightforward interpretation across different covariates and models, which is particularly useful for communicating findings to nonspecialised audiences [62].
The WQI provides a standardised statistical approach to support the assessment of management strategies and the identification of areas that require reform [63]. A thorough understanding of the temporal dynamics of lake or reservoir water is crucial because it allows water quality managers to identify the factors driving changes in water quality, predict future conditions, and take targeted measures [64]. The detailed analysis of the variability in water quality parameters across the weekly, monthly, and seasonal scales reveals that weekly and monthly variations often reflect acute events such as storms or dry periods, which may cause temporary, but significant, changes in water quality. Conducting a monthly analysis, however, helps to identify the persistence of certain water quality issues which may not be evident on a weekly scale, because the cumulative effects of rainfall and streamflow over a month show more stable patterns. The assessment of seasonal patterns captures the impact of different climatic conditions (autumn, winter, spring, and summer) and provides insights into how prolonged periods of rainfall and dry seasons influence water quality, which is vital for developing long-term water management strategies.
Moreover, an ensemble machine learning model combining XGBoost and a BO algorithm was proposed for the first time to predict the WQI, considering discharge as the input parameter, with an average accuracy of 85% (R²), an MAE of 0.33, and an RMSE of 0.42. The XGBoost model's distinct advantages include the mitigation of overfitting, parallelised tree building, and portability [39]. The results obtained by combining statistical methods and machine learning models across different temporal scales can easily be compared and understood.
However, our approach did not account for other factors, such as soil classification and flow diversion, during the correlation analysis, and the WQI prediction relied solely on a single dominant variable. Furthermore, the analysis and results were based on observational data, which may introduce uncertainties. The exclusion of additional variables could have led to an underestimation of the overall impact on water quality. An improvement to this approach may include considering more variables alongside streamflow for the prediction of the WQI. While the proposed model performed well during the training phase, further refinement is necessary to achieve a comparable accuracy during the testing phase. Another limitation of our study was that it considered a single case study due to the unavailability of streamflow data for other reservoirs. Future studies are recommended to use remote sensing data and machine learning algorithms for streamflow estimation and WQI prediction.
Conclusions
Our study leveraged a generalised additive model (GAM) to explore the correlations between hydrological variables (rainfall and streamflow) and various water quality parameters (PO4³−, NH3-N, pH, turbidity, TDS, and WQI). This study introduced an optimised ensemble model, XGBoost-BO, designed for predicting the WQI, which can improve the accuracy and reliability of WQI forecasts, offering a deeper understanding of the factors influencing water quality. The Cressbrook Reservoir was selected as the representative case, and the applicability of our method was verified at different temporal scales. The main findings of this study can be summarised as follows:
• The GAM results reveal significant correlations between streamflow and several water quality parameters. Specifically, on the weekly temporal scale, turbidity, TDS, and WQI showed a significant nonlinear relationship with discharge, which indicates that short-term variations in runoff may have a pronounced effect on these parameters. On the other hand, pH, PO4³−, and NH3-N showed a linear relationship with discharge. The high sensitivity of turbidity and TDS to discharge suggests that managing flow rates and reducing runoff during storm events could be crucial for water quality management.
• On a monthly basis, streamflow exhibited smoother relationships for most parameters but still influenced TDS and WQI nonlinearly. These correlations highlight the sustained influence of hydrological variables over longer periods.
• The seasonal analysis provides further insights: in autumn and winter, NH3-N and PO4³−, respectively, displayed high edf values. However, pH showed a linear, and WQI a weakly linear to linear, relationship with discharge over the four seasons. The seasonal interrelationship of the various water quality parameters with the hydrological variables implies that management practices need to be adjusted seasonally to address the specific challenges posed in each period.
• The accuracy metrics of the WQI prediction model using XGBoost-BO, as previously discussed, are consistent with these findings. The model's performance varies across the different temporal scales, exhibiting a higher accuracy during the training phase compared to the testing phase. This variation underscores the complexity of predicting water quality, influenced by the dynamic interplay of hydrological variables.
Understanding the temporal dynamics of rainfall and streamflow and their influence on water quality is paramount for developing effective management strategies, particularly amidst climate change challenges. Surface water quality can fluctuate in response to climate disruptions such as extreme rainfall and droughts, through the dilution and concentration of water quality parameters and through physical processes like bank erosion. Additionally, water quality is influenced by the interactions of surface runoff with organic matter on the land [65,66]. Heavy precipitation causes increased runoff, which may increase water contamination and public health concerns [66]. In our previous study, the water quality parameters were selected based on which parameters were affected by extreme rainfall events resulting from climate change [40]. The findings of the present study suggest that addressing significant climate impacts, together with site-specific determination and modelling using long-term precipitation, streamflow, and water quality data, is essential to achieving sustainable water quality objectives. By applying a similar comprehensive analysis and modelling, the regional evaluation of climate impacts on water quality could be explored.
Additionally, understanding how water quality parameters vary across different temporal scales in response to rainfall and streamflow enables local authorities to develop more effective and targeted strategies for maintaining and improving water quality. Specifically, weekly responses can help to detect and respond to sudden or critical events swiftly, while monthly analyses provide guidance for medium-term interventions to address significant water quality issues. Seasonal observations offer valuable insights into the long-term effects of climatic variations, enabling policy makers to implement proactive measures in anticipation of seasonal changes. The proposed multi-temporal approach thus supports the development of adaptive strategies and management practices that are responsive to both short-term and long-term fluctuations.
Figure 1. Study area map of the Cressbrook Reservoir catchment. The blue dashed lines show the Cressbrook Reservoir catchment area on the map of Australia (right).
Figure 3. Plot of observed and predicted data in the training phase of the XGBoost-BO model.
Figure 4. Plot of observed and predicted data in the testing phase of the XGBoost-BO model.
Figure 5. Strip plot of the accuracy metrics at six different temporal scales.
Table 1. Descriptive statistics of water quality parameters, rainfall, and streamflow.
Table 2. Output of the GAM model for WQ parameters.
Table 3. Summary of the performance metrics of the regression models at six different temporal scales.
Shot-earth for sustainable constructions
Earth has been used worldwide as a building material for centuries, and it is still one of the most used construction materials. In many countries the excavated soil is becoming one of the largest construction wastes, and its disposal is costly and problematic. For this reason, there is a rising interest in employing the excavated soil directly in the field, possibly as an added-value construction material. In this paper a new type of rammed earth is presented. This new material is based on the shotcrete technology and has been named shot-earth. A mix of stabilized soil, aggregates, and water is consolidated by high-speed projection rather than by mechanical compaction to obtain both structural and non-structural elements. The first characterization of the physical properties of this material has shown the great potential of this technology.
Introduction
Soil has been used for construction for centuries, with different methods and technologies. Although largely replaced by other materials, soil is nevertheless still in use in many areas of the world (see Fig. 1) and remains one of the most used construction materials. In many areas of the world, such as France, the soil is particularly suitable for construction because it contains an appropriate quantity of clay. Earth construction has proven to be durable in many contexts, as shown by the ancient city of Shibam (see Fig. 2a), entirely constructed of soil and still populated. Furthermore, many architects have succeeded in using earth to construct modern and durable buildings (see Fig. 2b).
The vernacular construction techniques [7] have evolved such that products such as "earth concrete" [20] are available on the market today. Among the "earth concretes" that have reached a certain popularity are the Alker and the Cast Earth [40]. Researchers have found a method to produce self-levelling earth concrete based on the use of clayey soil and CSA binders [9,16]. There are also many applications of soil placed by projection; most of these were developed for rendering, but attempts to construct walls and houses using projection have been made [3]. Not all soils are suitable for construction, and in these cases other construction techniques have been developed and used (stone and brick masonry, wood, etc.). In other cases the performance of the soil has been improved by stabilization [7,2]. In the past, the stabilization of soil was performed, for instance, by adding straw, resins, and arabic gums, while today the stabilization of soil is achieved by adding binders such as lime, gypsum, different types of cements, and magnesium oxides. High-energy compaction methods can also be viewed as a form of stabilization [1]. Stabilization is fundamental to improve a soil that is not suited for construction, and it is widely studied worldwide. In particular, Fig. 3 shows that enhancing the mechanical performance (particularly in terms of strength and durability) of crude earth by manipulating its clay fraction might be an effective low-cost approach to avoid various drawbacks linked to the use of Portland cement as a stabilizer [20]. Nevertheless, it is remarked that this might be true only for clayey soils.
Despite the renewed interest in soil construction, the codes and practices for structural design remain schematic for vernacular and modern soil-based structures. The technique presented in this paper, named "shot earthcrete" or "shot-earth", is a new technology based on the high-speed projection (spraying) of a mix of stabilized soil, aggregates, and water. Being based on a dry process, the quantity of water in the mix is low, and the quantity and type of the stabilization are chosen according to the quality of the excavated soil and the targeted application. Given the lack of norms and codes of practice, the characterization of the shot-earth is therefore mandatory in order to understand the behavior of this new material under load. In this research particular emphasis was paid to the following issues: shot-earthcrete as a construction material; the influence of the placing process on the shot-earthcrete; and shot-earthcrete as a construction technology. The experimental campaign focused firstly on the identification of the most important mechanical parameters, such as the ultimate compressive and tensile strengths, the Young modulus, and the Poisson ratio. In a second testing campaign the behavior of the shot earthcrete as a structural material was studied on wall-like specimens tested under compression and shear loads.
Experimental program
In order to design a load-carrying element, some mechanical parameters are needed [21,22]. For concrete, the relationships between many mechanical properties are well known, and therefore the value of the compressive strength is often sufficient to derive most of the other physical properties. The shot-earth can be considered a low-strength concrete, but this could not be assumed before testing. Therefore, standard practices for testing concrete and masonry were adopted to determine parameters such as the Young modulus, Poisson ratio, shear modulus, and tensile strength [24-26,29-34]. The experimental program consisted of two phases: the first one aimed at testing prismatic specimens; the second one was devoted to investigating both axial and diagonal compression of wall samples. All specimens were cured at controlled temperature and relative humidity and then tested at 28 days. During the drying process the weight loss was monitored with the aid of a thermal camera.
Materials and methods
Shot-earth consists of a dry mix of soil, cement, and coarse sand (size 0-8 mm) propelled through a nozzle. The size of the sand and the mix design are determined according to the composition of the excavated soil. In this case the mix proportions were 7/7/2 (7 soil, 7 sand, and 2 cement) by weight in the dry mixture. This mix was designed to obtain a strength sufficient to construct vaults and walls without altering the color of the final product. The mixture is pressurized in a properly designed machine and conveyed through a hose to the spraying nozzle by a high-velocity air stream. About 3% (by volume) of water is injected at the nozzle to obtain a certain degree of cohesion and to promote the hydration of the cement grains. Water has to be added in a quantity that permits adhesion of the mix when shot on the mould and ensures that the mix holds in place; furthermore, water should not be in excess, to prevent shrinkage. The projection methodology is fundamental to obtaining a good result. Two projection methods were tested: one overhead on a closed mould (see Fig. 4b), and one on a vertical surface with an angle of approximately 45° (see Fig. 4a). The overhead method proved to be the less effective, since it promoted a chaotic movement inside the mould with segregation of the mix (see Fig. 5b). Furthermore, during the spraying process a cloud of dust formed inside the mould, preventing the nozzle-man from seeing where the flow of material should be directed. The overhead technique is therefore more interesting when used on large horizontal surfaces rather than in vertical closed moulds. None of the above-mentioned problems were encountered while using the side projection method, which was therefore chosen for the ensuing phases of the testing campaign.
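As a simple worked example of the 7/7/2 proportioning (ours, not the paper's), the dry-component masses for a batch can be obtained by splitting the total dry mass by the weight ratio; the 160 kg batch size is arbitrary:

batch_masses <- function(total_dry_kg, ratio = c(soil = 7, sand = 7, cement = 2)) {
  total_dry_kg * ratio / sum(ratio)   # split the dry mass by the weight ratio
}
batch_masses(160)   # soil 70 kg, sand 70 kg, cement 20 kg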
The machine used to shoot the stabilized earth is a modified twin-chamber machine, similar to the one shown in Fig. 6. This equipment is generally used to shoot refractory materials and mixes of dry sand and cement; it is a so-called dry-process machine, and its production rate equals 10. This type of dry spray machine is appreciated by practitioners because of its steady rate of feeding into the air stream. This feature allows maintaining a constant water-cement ratio and a constant rate of shooting: an unsteady air stream and the ensuing pulsation might cause segregation problems with loss of strength of the material. The dry process also provides an excellent "green strength", since the mixture is well compacted and self-sustaining as soon as it is placed. Therefore, the surfaces can be immediately finished by hand or mechanically, without risk of damaging the structural elements.
Shot-earth is a method to construct structures and manufacture construction products using soil, and also a way to valorize the excavated soil. Basically, the soil used in construction should not have a large content of organic matter; therefore 25-50 cm of topsoil should always be removed. The topsoil is also precious for other applications, and it should not be damaged or polluted. The presence of pollutants should be checked carefully with techniques such as XRF, XRD, and other chemical analyses. Furthermore, the excavated soil should be left to dry and then undergo a sieving and screening process: sieving allows removing all coarse aggregates present in the soil, and screening helps to obtain an optimal size of the soil particles. The gravel and soil thus obtained are then used to formulate the shot-earth mix. In this case a cement CEM I 42.5N was used for stabilization.
Specimens
For this testing campaign several specimens were manufactured [23,27]; in particular, two large walls (1 × 1 × 0.3 m) were prepared in order to check the projection method (overhead or side) and to extract cores (see Fig. 5a) for the direct traction test, thus assessing the quality of the material. The specimen sizes and their use are listed in Table 1.
The drying process of the specimens was monitored by weighing and by means of thermal camera images (see Fig. 7). The drying process was carried out at controlled temperature and relative humidity (RH). The specimen weight was monitored using an electronic scale. Fig. 8 illustrates the weight loss in time, describing the drying process and the shot-earth curing: from the shot-earth casting, approximately 20 days elapsed before achieving a constant weight of about 132 kg. The specimen therefore lost around 6.4 kg as a result of the drying process [28]. The manufactured shot-earth walls had a bulk density of about 2070 kg/m³.
Compressive Test
The compressive strength was determined by using the standard test procedure for concrete. In fact, this shot-earth mix has shown mechanical properties that resemble those of a low-strength concrete.
The machine used for this test was a W + B LFV 200 kN apparatus (see Fig. 10). The compressive test was carried out on five 15 × 15 × 15 cm cubes cured for 28 days. The strength values are listed in Table 2. The failure mode, characterized by the formation of a cone, is admitted by the codes, and in general the specimens exhibited a brittle failure after achieving their maximum compressive stress (see Fig. 9).
Young modulus
The Young modulus was determined according to EN 12390-13 [36]. The test method allows determining two moduli of elasticity: the initial modulus measured at first loading, and the stabilized modulus measured after three loading cycles (see Fig. 11). The strain evaluation was based on the stress-strain curve, with three repetitions of loading for measuring the time effect. The stabilized modulus corresponds to the secant slope passing through the origin and through the point at the ordinate 0.33 fc (1). Results listed in Table 3 show the stabilized Young modulus, which was computed between 5% and 33% of fc by linear fitting; it showed relatively low scattering and varied between 9638 and 11980 MPa. R² is the proportion of the variance in the dependent variable predictable from the independent variable(s). The stress-strain curves and the fitted regression lines are depicted in Fig. 12. Linear regression is a linear approach for modelling the relationship between scalar variables; the slope of the trend line represents the Young modulus, obtained by linear regression.
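As an illustration of the secant-modulus computation just described (a sketch on assumed synthetic data, not the authors' procedure), the slope between 5% and 33% of fc can be extracted from a recorded stress-strain curve as follows:

# Synthetic stress-strain record for one cube (stress in MPa, strain dimensionless)
strain <- seq(0, 0.003, length.out = 200)
stress <- 10000 * strain - 8e5 * strain^2    # gently softening synthetic curve
fc <- max(stress)                            # peak stress reached in the record

sel <- stress >= 0.05 * fc & stress <= 0.33 * fc   # EN 12390-13 secant range
unname(coef(lm(stress[sel] ~ strain[sel]))[2])     # slope = secant Young modulus, MPa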
Poisson ratio
For evaluating the Poisson ratio, two transducers placed orthogonally to the load direction on opposite cube sides were used for measuring both the transverse and longitudinal strains (see Fig. 13). The load system was set in displacement control with three cycles of loading and unloading (to account for the time effect), assuming linear behaviour and considering the range up to 0.33 fc. The determined values of the Poisson ratio are listed in Table 4. It should be noted that the Poisson ratio showed high scattering across the specimens. The reason for this relatively high scattering lies in the progressive breakdown of the specimen as the load increases.

1 fc denotes the ultimate compressive strength.
Direct tensile test
Under a direct tensile load, the shot-earth showed an elastic-brittle behavior; thus the tensile branch may be well described by a linear constitutive law until brittle failure, according to the classical formula σ = E ε, where E is the elastic modulus of the soil-cement mixture (after curing) and ε is the axial strain. The direct tensile strength test consists of applying an increasing traction force until complete failure. Under a pure traction load, the tensile strength is measured as the ratio between the applied load and the specimen area. The direct tensile strength test provides more representative values than the flexural tests. Three shot-earth cylinders, 150 mm in diameter and 300 mm in height, cored from the existing walls, were tested under direct traction; the average strength of the specimens was about 1.1 MPa. Because of the notch, the middle cross section was reduced by 26%, see Fig. 14 (2). The stress was calculated as the ratio between the applied tensile load and the area of the notched cross section of the specimen. Table 5 summarizes the mechanical properties of the shot-earth obtained from the direct tensile tests; the average strength was found to be 1.159 MPa. Two extensometers with a gauge length of 38 mm were used to measure the longitudinal displacements.
Fig. 15 shows the stress-strain curve of a specimen under the direct tensile test.
Three-point flexural test
In measuring the tensile strength of brittle materials, the direct test method might be difficult to implement, inaccurate, and costly [39]. These are the reasons why, when a material is already well known, the indirect tensile test is often used for quality control and characterization purposes. A typical three-point bending test [37] set-up is shown in Fig. 16. The maximum bending tensile stress is calculated under the assumption that the neutral axis is at mid-height of the cross section and the stress distribution is triangular. The modulus of rupture, which is also defined as the bending tensile strength [38], can be measured using the classical formula σf = 3 P L / (2 b h²), where P is the maximum load, L is the span, and b and h are the width and depth of the cross section. Table 6 summarizes the flexural modulus of rupture of the shot-earth specimens. It should be noted that it shows relatively low scattering, varying between 1.759 and 2.281 MPa. The tensile strengths obtained by the indirect tensile test are higher, by a factor of two or more, than those obtained by the conventional direct test [39].

2 The depth of the notch is approximately 10 mm; the reduced area of the cross section thus turns out to be about 74% of the full section.
Evaluation of experimental results
Analyzing the compression stress-strain diagram up to a third of the strength, the behavior of the material can be considered linearly elastic. At a stress equal to 70% of the maximum compressive strength, the curvature increases rapidly (hardening) and, after achieving the maximum stress, the diagram shows a softening branch until the failure point, as depicted in Fig. 9. A loosening of the internal structure and an increase of the transverse strain are recorded after the stress reaches 0.7 fc.
The tensile strength of soil-cement depends on the test method. The values of the direct tensile strength recorded during this test campaign are coherent with those reported in the literature [35]. The ratio between the tensile and compressive strengths is reported in Table 7. In summary, it is possible to affirm that the shot-earth tested has the mechanical characteristics of a low-strength concrete (see Table 7). It is, however, necessary to highlight that the concrete-like behavior of the shot-earth must be further confirmed in order to safely use the RC design practices for calculating shot-earth elements. This could also lead to applying the same strengthening and maintenance strategies used for concrete to shot-earth structures [13].
Walls
The data of the first test campaign on walls highlighted that the frontal spraying methodology yields the best results, and therefore this placing method was retained. Three walls were prepared and tested: two under axial compression and one under diagonal compression.
The two walls tested under compression had dimensions of 800 × 800 × 100 mm, and one of them was reinforced with a steel mesh on each side. The third wall was manufactured with dimensions of 500 × 500 × 110 mm according to ASTM E519/E519M-15, the standard test method for diagonal tension [33].
Axial compression test of walls
Before testing the walls under compression, the top surface was rectified with a rapid-set cement mortar. The load applied to the specimen was distributed by a steel profile placed on the top surface. Linear variable differential transducers (LVDTs) with a gauge length of 250 mm were placed on both faces of the specimen for measuring both the longitudinal and lateral displacements. The geometry of the supports and the disposition of the LVDTs are shown in Fig. 17. The axial stress-strain curve (see Fig. 18) for the unreinforced shot-earth wall showed a linear behavior in the first part and then a progressive decrease in stiffness until the maximum load of about 756 kN was achieved. The modulus of elasticity E equals about 4418 MPa and was computed in the range 5-30% of the maximum stress. In general the wall exhibited a brittle failure shortly after achieving the maximum compressive stress. As depicted in Fig. 18, the positive values represent the longitudinal strain and the negative values the transverse strain. The reinforced wall was manufactured for the sole purpose of evaluating the shot-earth behavior with steel reinforcements in terms of technology application, workability, and the soil-cement/steel interface. Regarding the reinforced wall, failure occurred without achieving the maximum compressive strength, due to debonding of the cover and buckling of the steel rebars. This is the reason why the wall without reinforcement exhibited an ultimate load (756 kN) greater than that achieved by the reinforced wall (623 kN). The axial stress-strain curve for the reinforced shot-earth wall (see Fig. 19) is still in the elastic branch, with a Young modulus of 7406 MPa and small axial deformations before failure. Table 8 summarizes the mechanical properties of both walls tested under axial compression. In general the unreinforced walls exhibited a brittle failure shortly after achieving the maximum compressive load. In the elastic field, the reinforced wall showed a greater axial rigidity from the beginning of the test, which highlights that the steel reinforcement could improve the shot-earth performance. By analyzing the broken specimens it is evident that the shot-earth had no difficulty passing through the steel cage, and no segregation occurred (see Fig. 20).
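As an assumption-laden back-of-the-envelope check (not reported in the paper), the nominal compressive stress at the peak load follows from dividing the load by the wall cross-section, here assumed to be the full 800 × 100 mm section:

P <- 756e3      # peak load of the unreinforced wall, N
A <- 800 * 100  # assumed bearing cross-section, mm^2
P / A           # nominal stress in N/mm^2 (MPa), about 9.45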
Diagonal compression test
This test method was developed to measure more accurately the diagonal tensile strength by loading the wall in compression along one diagonal, thus inducing a diagonal tension failure, with the specimen splitting apart parallel to the direction of the load.
The diagonal compression test was performed according to ASTM E519-15 [33]. The test set-up consists of a compression load piston on the top surface with a maximum load of 300 kN. Two linear differential transducers (LVDTs) were placed along the diagonals on both faces of the specimen, as shown in Fig. 21. The test was carried out under displacement control at a rate of 0.6 mm/s. The purpose of the diagonal compression test is to identify the shear mechanical parameters, such as the ultimate shear strength and the shear modulus G. While the shear modulus measurements are considered accurate, the measurement of the shear strength is more complex: the presence of non-pure shear loading, nonlinear behavior, edge effects, material coupling, and normal stresses makes the evaluation of the shear strength questionable.
However, according to [33], the shear stress can be calculated as

τ = 0.707 P / A,   (1)

where P is the load applied to the wall and A is the area of the specimen. The shear strain is calculated as follows:

γ = (ΔV + ΔH) / g,   (2)

where γ is the shearing strain, ΔV is the vertical shortening, ΔH is the horizontal extension, and g is the gage length. Accordingly, the shear modulus turns out to be G = τ/γ. Fig. 22 displays the shear stress-strain curve of the wall, whereas Fig. 23 shows the diagonal deformation over time.
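A small helper evaluating the ASTM E519 expressions above might look as follows; the input values are made up for illustration:

# tau = 0.707*P/A (Eq. 1), gamma = (dV + dH)/g (Eq. 2), G = tau/gamma
shear_params <- function(P, A, dV, dH, g) {
  tau   <- 0.707 * P / A
  gamma <- (dV + dH) / g
  c(tau = tau, gamma = gamma, G = tau / gamma)
}
# Hypothetical reading: 150 kN on a 500 x 110 mm section, gauge length 250 mm
shear_params(P = 150e3, A = 500 * 110, dV = 0.30, dH = 0.25, g = 250)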
The shear mechanical parameters are listed in Table 9. Assuming an elastic behavior of the material, G was measured between 5% and 33% of the maximum shear stress. Failure of the specimen was preceded by the appearance and subsequent propagation of a crack that crossed the specimen diagonally, as shown in Figs. 24a-b. Just before collapse, a system of running cracks developed, thus causing the complete failure (3).
Conclusion
The shot-earth is a new and sustainable construction material consisting of a mix of excavated soil, sand, and water placed by high-speed projection (dry process). In this case the shot-earth was stabilized in order to improve its mechanical properties. The construction material obtained reveals good mechanical properties, which resemble those of a low-strength concrete. The shot-earth spraying technology is very flexible and suited to a wide range of non-structural and structural applications, such as curved, free-formed, and form-resistant structures. The experimental investigation accomplished in this work leads to the following main conclusions:
• Excavated soil can be used as a construction material provided that its characteristics are known and a proper stabilization is used;
• the high-speed projection allows for optimal compaction and homogeneity of the material, provided that the projection is performed frontally on an open mould;
• it might be argued that the mechanical behavior of shot-earth is similar to that of a low-strength concrete;
• the stabilization rate and type can be changed in order to fit the specificity of each application of this new material;
• the shot-earth increases the sustainability and circularity of the construction market by using a high rate of excavated soil in the field, thus reducing the logistics and the supply of other construction materials.
Further studies are being carried out to corroborate the results achieved in the present paper and to investigate other properties, such as the shrinkage, creep, and durability of this innovative material.⁴
³ Recent works concerning the modeling of damage at large deformations can be found in [11,12,14].
⁴ The mechanical behavior of rammed earth could be improved by inserting fibres into the mixture at the mixing stage. Recent works about cementitious composites reinforced by a steel fabric or by discrete fibres can be found in [4,5,17-19] and [10,15], respectively. Possible applications to improve building foundations could be investigated too [6].
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"Geology"
] |
Topology via Spectral Projectors with Staggered Fermions
The spectral projectors method is a way to obtain a theoretically well-posed definition of the topological susceptibility on the lattice. Up to now this method has been defined and applied only to Wilson fermions. The goal of this work is to extend the method to staggered fermions, giving a definition for the staggered topological susceptibility and testing it in the pure $SU(3)$ gauge theory. In addition, we also generalize the method to higher-order cumulants of the topological charge distribution.
INTRODUCTION
The properties related to the existence of field configurations with non-trivial topology, and the associated non-trivial dependence on the topological parameter θ, represent some of the most significant non-perturbative aspects of QCD and QCD-like theories. Monte Carlo simulations on a lattice are the most natural first-principles tool to investigate such properties; however, the fact that homotopy classes are not well defined on a discrete space-time makes the issue non-trivial. In principle, many definitions of topological charge can be assigned in the discretized theory, all consistent with each other in the continuum limit; however, discretization errors can differ depending on the choice.
The topological charge in continuum Yang-Mills theories is defined in terms of gluon fields as Q = ∫ d⁴x q(x) (1), where q(x) = (1/(32π²)) ε_μνρσ Tr[G_μν(x) G_ρσ(x)] is the topological charge density; Q is integer valued when proper boundary conditions are taken (e.g., periodic boundary conditions for a finite box, or vanishing action density at infinity). The index theorem [1] then relates Q to fermion field properties, in particular Q = n₊ − n₋, where n± are, respectively, the number of left-handed and right-handed zero-modes of the Dirac operator D̸. The possible lattice discretizations can be divided essentially into two different classes: gluonic and fermionic.
Gluonic definitions are based on a straightforward rewriting of Eq. (1) in terms of lattice gauge links. Despite having the correct naïve continuum limit, they are non-integer valued and subject to renormalizations induced by ultraviolet (UV) fluctuations. In particular, correlation functions of Q_L must be renormalized both additively [2] and multiplicatively [3] in order to match the corresponding continuum quantities apart from finite O(a) corrections (where a is the lattice spacing): this is the case for the topological susceptibility, χ ≡ ⟨Q²⟩/V, as well as for the higher-order cumulants of Q [4], which enter the coefficients of the Taylor expansion of the free energy density in θ. Alternatively, one can make use of smoothing methods which dampen UV fluctuations of the gauge fields while leaving the global topological background unchanged, thus leading to an approximately integer-valued topological charge: various similar methods have been proposed, such as cooling [5] or the gradient flow [6,7], all leading to equivalent results [8,9].
Fermionic definitions, being based on a counting of zero modes, are in principle better founded, and would be realized in practice by a simple evaluation of the trace of the γ5 operator on a basis of eigenvectors of the Dirac operator. However, also in this case one has to face problems related to the difficulty of implementing fermions with the correct chiral properties on a lattice. The best approximation is provided by discretizations of the Dirac operator satisfying the Ginsparg-Wilson relation [10], an example being the overlap operator [11], which satisfies an exact lattice chiral symmetry [12], leads to a counting of exact zero modes, and has been widely used as a tool to extract the topological content of gauge configurations [13,14].
Alternatively, one can use a more standard fermion discretization, either Wilson- or staggered-based. However, in this case the zero modes are non-exact, or do not have a well-defined chirality, or both. The counting is not well defined and, if one tries to define Q_L as the trace of the discretized γ5 operator, one again needs to take into account proper renormalizations [15-18]. This, however, is not an obstruction, and the method based on spectral projectors relies exactly on this strategy. Indeed, in Refs. [19,20] spectral projectors have been used to obtain a theoretically well-posed definition of the continuum topological susceptibility, which is also easily adaptable to numerical simulations on the lattice. In particular, the method has been derived for Wilson fermions and successfully tested both in pure Yang-Mills theory [21,22] and in full QCD [23].
All the methods described above, either fermionic or gluonic, are theoretically well founded or designed so as to match the correct definition of homotopy classes when these become well defined: as a matter of fact, all methods provide consistent results when the continuum limit is taken. The question of which method one should adopt can then be answered based on two considerations: numerical convenience, i.e., the computational effort required by a given definition of Q_L, and the magnitude of the residual corrections to the continuum limit.
The second issue can be particularly relevant for numerical simulations involving light dynamical quarks. Actually, in this case most of the lattice artifacts stem from the discretization of the fermion determinant: the presence of zero modes should suppress configurations with non-zero topological charge, eventually leading to the absence of θ-dependence in the limit of massless quarks; however, a lattice discretization with non-exact chiral properties typically leads to a less efficient suppression, because of large would-be zero modes, thus leading to somewhat larger values of the topological susceptibility at finite lattice spacing. This problem can make the approach to the continuum limit particularly difficult, both at zero and finite temperature [24-28], making it necessary to perform simulations at lattice spacings much smaller than those usually adopted in quenched simulations. A possible heuristic solution adopted in the recent literature has been to reweight gauge configurations by hand, according to the lowest eigenvalues of the dynamical fermion operator [26]. Even if the above problem is related to the discretized path-integral measure, rather than to the discretized observable, it is not inconceivable that the choice of a proper fermion discretization for the topological charge could ameliorate the convergence to the continuum, especially if the discretization matches the one adopted in the measure. This possibility is actually supported by a recent study [23], investigating full QCD with twisted-mass Wilson fermions, in which strongly reduced lattice artifacts are observed for the zero-temperature topological susceptibility if a definition based on twisted-mass spectral projectors is adopted, instead of other standard gluonic definitions.
We can now come to the main point of our study: we would like to extend the definition of topological quantities based on spectral projectors to the case of staggered fermions, the main motivation being to adopt it in ongoing lattice investigations of θ-dependence in full QCD with staggered fermions [28]. Most of our discussion will focus on how to properly define and renormalize the definition of the topological susceptibility based on spectral projectors in the case of staggered fermions; we will also present some numerical results which however, given the goals of this paper, will be limited to measurements taken on quenched ensembles; the case of full QCD ensembles will be treated separately in an upcoming work. In addition, we will also show how the spectral projectors method can be exploited to define and evaluate cumulants of the topological charge higher than just the topological susceptibility.
The paper is organized as follows. In Section 2, after a brief review of the method for Wilson fermions, we extend the spectral projectors method to the case of staggered fermions, deriving spectral expressions for the topological susceptibility and for all higher-order cumulants; moreover, in the same section, we also describe the numerical strategies we adopted to test spectral projectors on the lattice. In Section 3 we present numerical results for the pure SU(3) gauge theory and finally, in Section 4, we draw our conclusions and discuss future perspectives. In this Section, after a brief review of the main ideas underlying the method of spectral projectors for Wilson fermions, we show how they can be extended to the staggered case, obtaining a similar expression for the topological susceptibility. We also discuss the extension of the method to higher-order cumulants and a practical way to fix the cut-off scale adopted in the method.
A. Topological susceptibility via spectral projectors: the Wilson case
As for other definitions of topological charge based on the index theorem, the starting point is to write it in terms of the trace of the γ5 operator, Q₀ = Tr{γ5}. When the trace is taken over eigenvectors of the lattice Wilson fermion operator, this definition is subject to a multiplicative renormalization, because chiral symmetry is explicitly broken by Wilson fermions. In particular, making use of non-singlet chiral Ward identities (see Ref. [15] for more details), one can show that the renormalized charge can be expressed as [19,20] Q = (Z_S^(ns)/Z_P^(ns)) Q₀, where Z_S^(ns) and Z_P^(ns) are, respectively, the renormalization constants of the non-singlet scalar and pseudo-scalar fermionic densities. The correct renormalization factor of the charge can be easily obtained from its bare expression once the renormalization constants of the densities are chosen according to the non-singlet Ward identities written for Wilson fermions (for further details, we refer to Refs. [15,16]). Besides, note that the ratio Z_S^(ns)/Z_P^(ns) is different from 1 at finite lattice spacing because the Wilson operator D_W explicitly breaks chiral symmetry [16].
The renormalization constants Z can be obtained by a variety of non-perturbative means. In the spectral projectors approach, one writes their ratio directly in terms of the so-called "spectral sums", since spectral sums can be expressed in terms of density chains, whose renormalization properties are known [20]. Note that the spectral sums (7) and (8) lead to the pseudo-scalar density chains (9) and (10) because of the γ5-hermiticity of the Wilson operator: γ5 D_W† γ5 = D_W. Eq. (6) holds for high enough values of k and l, but also if the inverse powers of D_W† D_W are traded for a generic, fast-decreasing function f(D_W† D_W) (see Ref. [20] for more details). Choosing the Heaviside function f(x) = θ(M² − x), one can easily evaluate the traces in the spectral basis, where P_M is the orthogonal projector onto the eigenspaces of the Wilson operator with eigenvalues |λ| ≤ M. Analogous considerations can be applied to the fermionic definition of the bare charge Q₀, which can be rewritten in terms of a spectral expression as well, Q₀ = Tr{γ5 P_M}: in the continuum it would suffice to project just onto the kernel of the operator, but this is of course not true at finite lattice spacing and for a fermion operator with non-exact zero modes. Finally, one can write the renormalized definition of the lattice topological susceptibility via spectral projectors as χ_SP = (Z_S^(ns)/Z_P^(ns))² ⟨(Tr{γ5 P_M})²⟩ / V. This definition presents only O(a) or O(a²) corrections (depending on the explicit discretization) if the cut-off mass M is properly tuned (i.e., its renormalized value is kept fixed) as the continuum limit is taken [19-21]. An important point worth stressing is that, because of the fast-decreasing ultraviolet behavior of the functions appearing in the spectral sums (and in particular of the projector P_M), the above expressions are free of short-distance singularities, so that no further renormalizations are needed apart from the multiplicative one. In particular, no additive renormalization appears for the topological susceptibility, contrary to what happens for the standard gluonic definition because of contact terms. In this sense, the spectral projectors method has some analogies with filtering methods [29-31], in which a projection onto the eigenspace of the lowest eigenvectors of the Dirac operator is used as a smoothing technique.
B. Topological susceptibility via spectral projectors: the staggered case
A bare version of the index theorem can be written for the staggered Dirac operator D_st by just taking into account that, in the continuum, it describes 2^(d/2) degenerate flavors of dynamical fermions, where d is the space-time dimension, so that the number of zero modes corresponds to 2^(d/2) times the topological charge. Therefore, one can start from the bare definition Q_0st = (−2)^(−d/2) Tr{Γ5}, where Γ5 is the staggered version of γ5 (more precisely Γ5 → γ5 ⊗ Id in the continuum limit; see, e.g., Ref. [17] for an explicit expression). Chiral symmetry is partially broken also for staggered fermions, so that Q_0st renormalizes multiplicatively as in the Wilson case; however, the renormalization constants are different, since the breaking pattern is not the same. Indeed, the staggered lattice action is invariant under a remnant of the chiral symmetry generated by Γ55 = γ5 ⊗ γ5. We refer the reader to Refs. [17,18] for a detailed discussion of the anomalous Ward identities for staggered fermions. The final result for the renormalized staggered charge, which has been obtained by writing the Witten-Veneziano equation starting from the renormalized singlet axial Ward identity, is Q = (Z_P^(s)/Z_S^(s)) Q_0st, where the constants Z_S^(s) and Z_P^(s) appear in the inverse order with respect to Wilson fermions, and refer respectively to the scalar and pseudo-scalar flavor-singlet (compared to non-singlet in the Wilson case) bare densities S₀ and P₀.¹ In the staggered case the ratio Z_P^(s)/Z_S^(s) can again be expressed through a ratio of spectral sums. We note that also in this case the ratio of spectral sums is inverted with respect to Eq. (6) for Wilson fermions. This is due to the fact that we are dealing with singlet, rather than non-singlet, densities.
¹ Note that our notation differs from the one employed in [17,18]. The bare singlet scalar and pseudo-scalar densities are written there as S₁ and P₁, cf. Eqs. (2.5) and (2.6) in [17], and their renormalization constants are written in terms of the quark mass one, m_R = Z_m^(−1) m, in [17] and also in [18].
Following the same line of reasoning as Ref. [20], one can trade also in this case the inverse powers of D_st† D_st for a fast-decreasing function, in particular for P_M, where now P_M is the orthogonal projector onto the eigenspaces of the Dirac operator with purely imaginary eigenvalues −iλ such that λ² ≤ M². This leads to a spectral expression for the ratio Z_P^(s)/Z_S^(s) and finally, using also in this case a spectral projector definition for the bare topological charge, Q_0st = (−2)^(−d/2) Tr{Γ5 P_M}, we obtain the following expression for the topological susceptibility of staggered fermions in d dimensions: χ_SP = 2^(−d) (Z_P^(s)/Z_S^(s))² ⟨(Tr{Γ5 P_M})²⟩ / V, which coincides with Eq. (14) apart from the factor 2^(−d), related to the taste degeneration, as previously explained. Also in this case, the same considerations apply regarding the fast-decreasing behavior of the projector in the ultraviolet, leading to the absence of short-distance singularities and additive renormalizations. As for Wilson fermions, the cut-off mass M appears as a free parameter of the definition. If the zero modes were exact, one could take M arbitrarily small. Exact zero modes are obtained for the overlap version of the staggered operator [32,33]; however, for the standard staggered operator D_st they are shifted by lattice artifacts because of the explicit breaking of the chiral symmetry [18], so that one must extend the sum up to a certain cut-off eigenvalue M, keeping its renormalized value fixed as the continuum limit is approached (see Subsection 2 D for more details).
C. Higher-order terms of the θ-expansion via spectral projectors
The θ-dependence of the vacuum energy (free energy) density can be parametrized around θ = 0 as follows [34]: f(θ) − f(0) = (χ/2) θ² (1 + b₂ θ² + b₄ θ⁴ + …), where the b_2n coefficients, which parametrize the corrections to the quadratic behavior of f(θ), are defined as b_2n = (−1)ⁿ 2 ⟨Q^(2n+2)⟩_c / ((2n+2)! ⟨Q²⟩), where ⟨Q^k⟩_c denotes the k-th-order cumulant of the probability distribution P(Q).
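To make the cumulant definition concrete, the sketch below (toy data, not the paper's analysis code) estimates b₂ from a sample of topological charges; a Gaussian distribution gives b₂ ≈ 0, while a difference of two Poisson variables mimics the dilute-instanton-gas value b₂ = −1/12:

```python
import numpy as np

def b2_from_charges(Q):
    """b2 = -<Q^4>_c / (12 <Q^2>), with the connected fourth cumulant
    <Q^4>_c = <Q^4> - 3<Q^2>^2 (assuming <Q> = 0 by parity)."""
    q2 = np.mean(Q**2)
    q4c = np.mean(Q**4) - 3.0 * q2**2
    return -q4c / (12.0 * q2)

rng = np.random.default_rng(0)
Q_gauss = rng.normal(0.0, 3.0, 100_000)
print(b2_from_charges(Q_gauss))                       # ~ 0 for a Gaussian
Q_diga = rng.poisson(5.0, 100_000) - rng.poisson(5.0, 100_000)
print(b2_from_charges(Q_diga))                        # ~ -1/12 ~ -0.0833
```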
These quantities can be computed in terms of spectral projectors as well, exploiting in particular the fact that, because of the absence of short-distance singularities, only multiplicative renormalizations have to be taken into account. In particular, following the same line of thought as for the topological susceptibility, it is easy to prove the general expression ⟨Q^(2n)⟩_c = 2^(−dn) (Z_P^(s)/Z_S^(s))^(2n) ⟨(Tr{Γ5 P_M})^(2n)⟩_c. The above expression has been written explicitly for the case of staggered fermions but, of course, the final expression holds for Wilson fermions as well, after omitting the factor 2^(−dn), again related to taste degeneracy.
D. Numerical implementation and remarks on the choice of the cut-off mass M
In the case of Wilson fermions, different strategies have been adopted in the literature for the evaluation of the traces appearing in Eq. (14), either by means of noisy estimators [21,22] or by an explicit computation, configuration by configuration, of all relevant eigenvectors of the Dirac operator entering the traces [23]. In our numerical implementation we followed the second strategy, i.e., we evaluated the traces appearing in Eqs. (24) and (27) by expressing the projector P_M through the eigenvectors u_λ of D_st, limiting the sum over eigenvalues up to the threshold λ_max = aM. The relevant quantities entering the expressions for the topological susceptibility and the higher-order cumulants are then Tr{Γ5 P_M} = Σ_{|λ|≤aM} u_λ† Γ5 u_λ and Tr{P_M} = ν(M), where ν(M) is the number of eigenvalues with |λ| ≤ aM.
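The trace evaluation just described can be illustrated with a schematic dense-matrix sketch (not lattice production code; the name Gamma5 stands for the staggered Γ5 in whatever basis the eigenvectors are stored):

```python
import numpy as np

def spectral_observables(eigvals, eigvecs, Gamma5, aM):
    """Given low eigenpairs of the staggered operator (eigvals stored as
    the real lambda of the purely imaginary eigenvalues), return
    nu(M) = Tr{P_M} and Tr{Gamma5 P_M} with P_M built from the kept modes."""
    keep = np.abs(eigvals) <= aM
    nu = int(keep.sum())
    U = eigvecs[:, keep]                 # columns = eigenvectors below aM
    # sum_i  u_i^dagger Gamma5 u_i
    tr_g5_PM = np.einsum('ai,ab,bi->', U.conj(), Gamma5, U).real
    return nu, tr_g5_PM

# toy example: random orthonormal "eigenvectors" and a diagonal Gamma5
rng = np.random.default_rng(1)
n = 64
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
lam = np.sort(rng.uniform(0.0, 0.5, n))
G5 = np.diag(np.where(np.arange(n) % 2 == 0, 1.0, -1.0))
print(spectral_observables(lam, U, G5, aM=0.1))
```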
As already explained in Subsection 2 B, the choice of the cut-off mass M is irrelevant in the continuum limit, since the index theorem states that only zero-modes contribute to topology. However, corrections to the continuum limit do depend on it, and it is possible to show that lattice artifacts are O(a²) if the renormalized value of the cut-off mass, M_R, is kept fixed as the lattice spacing is varied [17,20]: χ_SP(a) = χ + c(M_R) a² + O(a⁴), where χ is the continuum value of the topological susceptibility. Therefore, one has to tune M as a function of a in order to keep M_R fixed. Most of the following discussion applies specifically to the case of staggered fermions, for which the mass renormalizes as M_R = (Z_S^(s))^(−1) M [17], where Z_S^(s) is the renormalization constant of the singlet scalar density mentioned above. This quantity is not separately accessible in terms of spectral sums; however, in many lattice studies it is already known by other means. For instance, in numerical simulations of full QCD performed on a line of constant physics, one already tunes the bare quark masses as a function of the lattice spacing so as to keep the physics, hence the renormalized quark masses, unchanged: in these cases it suffices to keep the ratio of M to any of the bare quark masses unchanged as the continuum limit is approached.
However, in the present study we consider as a numerical test-bed the case of the pure gauge theory at zero temperature, for which the strategy above cannot be applied. In this case, to avoid a direct computation of Z_S^(s) for each lattice spacing, we devised the following strategy. The number of eigenmodes found below a given threshold scales proportionally to the total lattice volume, i.e., the density of eigenmodes with |λ| < M, ⟨ν(M)⟩/V, is constant as the thermodynamical limit V → ∞ is approached. Moreover, if the renormalized threshold M_R is kept fixed, the density of eigenmodes is expected to be independent of the lattice spacing. This is supported by leading-order chiral perturbation theory and the Banks-Casher relation: in the large-volume and chiral limit, and for small enough M, one has [20] ⟨ν(M)⟩/V ≃ (2/π) Σ M, where Σ is minus the chiral condensate in the thermodynamic and chiral limit, and we have used the fact that Z_S^(s) is the renormalization constant for both the singlet scalar density and the inverse mass, so that Σ M is a renormalization-group-invariant quantity. Therefore, our prescription in the following will be to keep the bare quantity ⟨ν(M)⟩/V fixed (with the space-time volume expressed in physical units) in order to maintain M_R constant as the lattice spacing is changed.
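The prescription of holding ⟨ν(M)⟩/V fixed can be implemented by inverting the measured mode-density curve at each β; a sketch, assuming the curve is monotonically increasing in M:

```python
import numpy as np

def tune_cutoff(M_grid, nu_over_V, target_density):
    """Find the bare cutoff aM whose measured mode density nu(M)/V
    (in physical units) equals the prescribed renormalized target."""
    return np.interp(target_density, nu_over_V, M_grid)

# toy density curve mimicking nu(M)/V ~ (2/pi) * Sigma * M at small M
M = np.linspace(0.01, 0.20, 50)
density = 0.02 * M + 0.05 * M**2
print(tune_cutoff(M, density, target_density=1e-3))
```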
NUMERICAL TESTS IN THE PURE GAUGE THEORY
In order to test the definition of topological quantities via staggered spectral projectors, we considered the pure SU(3) Yang-Mills theory. Configurations were generated using the standard Wilson plaquette action for N_c = 3, S = β Σ_{x, μ<ν} [1 − (1/N_c) Re Tr Π_μν(x)], where Π_μν(x) is the plaquette, and a standard local algorithm consisting of a 4:1 mixture of over-relaxation and over-heatbath. For simulations at zero temperature, we considered 4 different lattice spacings, corresponding to β = {5.9, 6.0, 6.125, 6.25}, and symmetric L × L × L × L lattices with L in the range 1.2-1.8 fm (see Table I), which for the pure gauge theory is large enough to ensure the absence of significant finite-size effects, in particular for topological quantities. In the following we express physical quantities, as well as the lattice spacing, in terms of the Sommer parameter r₀ ≃ 0.5 fm.
In all cases, we collected 300 well-decorrelated configurations, on which the topological quantities were measured both with staggered spectral projectors and, for comparison, with a standard gluonic definition of the topological charge. We note that the statistics are not large, because the main purpose of our numerical simulations is to test the staggered definition of spectral projectors and not to perform a precision study of topology in the pure gauge theory.
Concerning the gluonic definition, in this work we adopted the clover discretization of the topological charge density, q_L(x) = −(1/(2⁹ π²)) Σ_{μνρσ = ±1…±4} ε_μνρσ Tr[Π_μν(x) Π_ρσ(x)]. As for the smoothing method, we decided to apply the standard cooling procedure, performing 80 cooling sweeps for each configuration (the topological susceptibility was stable already after 30 sweeps). The cooled topological charge was further rounded to the closest integer, following the procedure described in Refs. [35,36], and then used to compute the gluonic topological susceptibility via χ_gluo = ⟨Q²⟩/V.
A. Spectral determination of χ at T = 0
To start with, in Figure 1 we show the topological susceptibility χ_SP obtained via spectral projectors for β = 6.25 and different values of the bare cut-off mass M, comparing it with the gluonic determination on the same sample of configurations. We observe an approximate plateau over a wide range of M, where χ_SP is in good agreement with the gluonic definition. Such a plateau is not required a priori, but it is reasonable to expect it: the cut-off mass M filters away fluctuations at the UV scale; in particular, M⁻¹ can be viewed as the analogue of the smoothing radius of smoothing techniques, so that the appearance of the plateau signals a well-defined separation between the UV scale and the physical scale of topological excitations.
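As a schematic version of the gluonic estimator described above (cooled charges rounded to the nearest integer, χ = ⟨Q²⟩/V, with a simple jackknife error; toy numbers throughout):

```python
import numpy as np

def chi_gluonic(Q_cooled, volume):
    """Round cooled gluonic charges to integers and estimate
    chi = <Q^2>/V with a leave-one-out jackknife error."""
    Q = np.rint(Q_cooled)
    n = len(Q)
    chi = np.mean(Q**2) / volume
    jk = ((Q**2).sum() - Q**2) / ((n - 1) * volume)
    err = np.sqrt((n - 1) * np.mean((jk - jk.mean())**2))
    return chi, err

rng = np.random.default_rng(2)
Q_sample = rng.normal(0.0, 2.0, 300)   # toy cooled charges, 300 configs
print(chi_gluonic(Q_sample, volume=16**4))
```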
In order to extrapolate χ_SP towards the continuum, we considered determinations at fixed values of the renormalized mass M_R. To keep M_R fixed, we measured the dependence of ⟨ν(M)⟩/V on M so that, once a particular value of the mode density was fixed, we could find, for each β, the value of M corresponding to the same renormalized mass M_R. Fig. 2 illustrates this procedure for two different values of M_R employed for the continuum extrapolation. In Table I we report, for each β, the corresponding value of the lattice spacing, the volume in lattice units, and the measurements of the topological susceptibility obtained with spectral projectors and with the gluonic definition.
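Schematically, the extrapolation at fixed M_R is a straight-line fit in a²; the sketch below uses invented numbers purely for illustration:

```python
import numpy as np

def continuum_limit(a_over_r0, chi_r0_4):
    """Fit chi(a) = chi_cont + c * a^2, the leading scaling expected
    at fixed renormalized cutoff mass M_R."""
    c, chi_cont = np.polyfit(np.asarray(a_over_r0) ** 2, chi_r0_4, 1)
    return chi_cont, c

a = [0.19, 0.16, 0.14, 0.12]          # a / r0 (toy values)
chi = [0.070, 0.066, 0.063, 0.061]    # r0^4 * chi (toy values)
print(continuum_limit(a, chi))
```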
As shown in Table II, the continuum value of χ obtained with spectral projectors is independent of the choice of M_R and well compatible with the gluonic determination within the errors. For the sake of completeness, we also report determinations of χ obtained by other fermionic methods, in particular using the overlap operator and Wilson spectral projectors. They all agree, within errors, with our staggered spectral determination. Fig. 3 shows the extrapolation towards the continuum both for χ_SP and χ_gluo. Lattice artifacts have a slight dependence on the cut-off mass M_R, which is however well contained within errors and comparable in magnitude to that affecting the gluonic definition. This is similar to what happens in the case of Wilson spectral projectors [21,22].
Table I: Determinations of χ_SP and χ_gluo for each β. The lattice spacings in units of the Sommer parameter r₀ were taken from Ref. [37]. The values of the renormalized cut-off masses M₁ and M₂ correspond, respectively, to r₀⁴ ⟨ν⟩/V = 1 · 10⁻³ and 3 · 10⁻³. Assuming Eq. (34) and the values of r₀ and Σ_R measured respectively in [38] and [39], their values in physical units are M₁ ≃ 33 MeV and M₂ ≃ 98 MeV.
B. Spectral determination of b₂ at high T
The next-to-leading coefficient in the θ-expansion of the free energy density is (see Eqs. (25) and (26)) b₂ = −⟨Q⁴⟩_c / (12 ⟨Q²⟩) = −(⟨Q⁴⟩ − 3⟨Q²⟩²) / (12 ⟨Q²⟩). The gluonic discretization simply uses the cooled, rounded charge in this expression; instead, from Eq. (27), we get the staggered spectral expression b₂^SP = −2^(−d) (Z_P^(s)/Z_S^(s))² [⟨(Tr{Γ5 P_M})⁴⟩ − 3⟨(Tr{Γ5 P_M})²⟩²] / (12 ⟨(Tr{Γ5 P_M})²⟩). The measurement of b₂ at zero temperature in general requires quite large statistics, because it is necessary to detect deviations from Gaussianity of the topological charge distribution P(Q), which are small [4,35,36,40-42] and become less and less visible as the lattice volume is increased. For this reason, we decided to test the numerical determination of b₂ via spectral projectors in the high-temperature, deconfined phase of the SU(3) pure gauge theory, since in that regime its value is larger than in the T = 0 case, approaching the prediction b₂ = −1/12 of the Dilute Instanton Gas Approximation (DIGA), while at the same time the width of the distribution (proportional to the topological susceptibility) is smaller [43]. We considered, in particular, a determination at β = 6.305 on a 30³ × 10 lattice, corresponding to a temperature T ≃ 338 MeV ≃ 1.145 T_c, for which a determination of b₂ by the gluonic method was already reported in Ref. [43]. For a finite-temperature implementation of the spectral projectors method, an ambiguity could emerge as to whether the first ratio in Eq. (40), corresponding to the multiplicative renormalization (Z_P^(s)/Z_S^(s))², should be computed in the finite-T simulation or instead at zero T. In principle, renormalization constants should be independent of infrared (IR) conditions such as the temperature scale. To check this, we computed the ratio both from the finite-temperature simulation and from a dedicated simulation on a symmetric 30⁴ lattice at the same value of β: results are shown and compared in Fig. 4, where it clearly appears that, apart from the lowest values of M, for which the sensitivity to IR conditions is large, the two determinations are in reasonable agreement with each other.
Finally, in Fig. 5 we show the results obtained for b₂^SP, using the zero-temperature renormalization constants, as a function of the bare cut-off mass M. In this case we do not fix a particular value of M, since we have data at a single value of β and hence do not aim at performing a continuum extrapolation; however, we note that the results are in good agreement, over a wide range of M, with the gluonic determination of b₂ performed on the same configuration sample, as well as with the determination of Ref. [43] at the same β and lattice size (−12 b₂ = 1.10(7)).
DISCUSSION AND CONCLUSIONS
In this work we have extended the spectral projectors method to the case of staggered fermions. Despite the different patterns of chiral symmetry breaking, the final formula for the topological susceptibility, Eq. (24), turns out to be practically identical to the one for Wilson fermions, once the proper staggered discretization of the γ5 operator and the fourfold degeneracy of staggered fermions are taken into account. Moreover, the method has been extended to all higher-order cumulants of the topological charge distribution, which enter the Taylor expansion in θ of the free energy density. The method has then been tested in the pure SU(3) gauge theory, both at zero temperature and, for the fourth-order cumulant, at finite T, with results in agreement with previous determinations in the literature obtained with other fermionic or gluonic definitions of the topological charge.
Corrections to the continuum limit turn out to be of the same order of magnitude as those observed for the gluonic definition.
"Physics"
] |
Distributed Sensing Network Enabled by High-Scattering MgO-Doped Optical Fibers for 3D Temperature Monitoring of Thermal Ablation in Liver Phantom
Thermal ablation is achieved by delivering heat directly to tissue through a minimally invasive applicator. The therapy requires temperature control between 50 and 100 °C, since tumor cell mortality is directly connected with the thermal dosimetry. Existing temperature monitoring techniques have limitations: single-point monitoring, costly equipment, and exposure of patients to X-ray radiation. Therefore, it is important to explore an alternative sensing solution which can accurately monitor temperature over the whole ablated region. This work proposes a distributed fiber optic sensor as a potential candidate for this application due to the small size, high resolution, bio-compatibility, and temperature sensitivity of optical fibers. The working principle is based on spatial multiplexing of optical fibers to achieve 3D temperature monitoring. The multiplexing is achieved with high-scattering, nanoparticle-doped fibers as sensing fibers, which are spatially separated by lower-scattering single-mode fibers. The setup, consisting of twelve sensing fibers, monitors tissue of 16 mm × 16 mm × 25 mm in size exposed to gold nanoparticle-mediated microwave ablation. The results provide real-time 3D thermal maps of the whole ablated region with high resolution. The setup allows identification of asymmetry in the temperature distribution over the tissue and adjustment of the applicator to respect the allowed temperature limits.
Introduction
Thermal ablation is a minimally invasive therapy that has been widely used for the treatment of tumors in different tissues, including liver, kidney, lung, bone [1], thyroid [2], and brain tissue [3]. The objective of hyperthermic ablation is to destroy cancer cells by delivering heat directly to the tumor tissue using a needle-like applicator [4]. There are different types of thermal ablation techniques depending on the energy source used to produce heat, such as radio-frequency (RF), microwave (MW), laser, and high-intensity focused ultrasound (HIFU) ablation [1]. In RF ablation, frictional heat is produced by an oscillating electrical current, which flows in the circuit created by two electrodes (on the applicator tip and on the skin) [1,5-7]. MW ablation is based on an increase in the kinetic energy of the water molecules inside the tissue, induced by an electromagnetic field [4,8-10]. Laser ablation is achieved by applying laser light to the tumor, whose energy is then converted into heat [1,11,12]. The working principle of HIFU ablation is based on the application of high-intensity ultrasound waves to heat the target tissue [1,13,14].
Nowadays, many researchers are investigating the application of nanoparticles in localized thermal ablation procedures [15-17]. In particular, gold nanoparticles are actively used due to their bio-compatibility, bio-inertness, and unique tunable optical and electric properties, which are governed by the localized surface plasmon resonance (LSPR) phenomenon [18]. Due to LSPR, gold nanoparticles can absorb incident light and produce heat locally, which facilitates a temperature increase in the desired area within a short period of time, preventing damage to surrounding healthy cells [19,20]. The size of the nanoparticles plays a crucial role in the efficiency of the ablation procedure, and sizes between 5 and 60 nm are widely used for thermal ablation therapy on account of cellular uptake and circulation in blood. The highest cellular uptake and the strongest LSPR effect are observed for gold nanoparticles of 20-60 nm in size [21], while smaller nanoparticles are cleared from the body more readily [22].
The behavior of the tissue exposed to thermal ablation depends on the applied temperature. The tissue responds to a temperature of 41 °C by generating protective heat-shock proteins, which increase the thermal resistance of the tissue in order to prevent thermal damage [1]. When the temperature is further increased to 42-46 °C, the tissue becomes more susceptible to irreversible damage, but the cells can still survive [23]. Temperatures higher than 46 °C cause cell death (the higher the temperature, the faster death occurs). When the temperature of the tissue reaches 60 °C, the plasma membrane melts, leading to almost instant cell death [1]. At temperatures greater than 105 °C, the tissue starts boiling, vaporizing, and carbonizing. In order to perform thermal ablation safely, the temperature must be maintained in the range of 50-100 °C throughout the process [23]. Therefore, it is crucial to use an accurate real-time temperature monitoring technique during the thermal ablation procedure.
Currently applied methods of temperature measurement can be classified into two groups: invasive techniques, which require insertion into the tissue, and non-invasive alternatives based on imaging systems [24,25]. One of the invasive methods is based on thermocouples, in which the sensing part consists of two metallic wires joined at junctions [25-27]. The main drawbacks of thermocouples are the influence of the metallic components on the temperature propagation inside the tissue and the inability to measure the temperature change over the entire ablated region [28]. Non-invasive methods use medical imaging techniques based on computed tomography (CT) [29,30], ultrasound imaging [31,32], magnetic resonance tomography (MRT) [33,34], and others. The major advantages of the medical imaging methods are their non-invasiveness and their ability to monitor temperature over the entire ablated region. Their limitations include the need to expose the patient to X-ray radiation for CT and the expense of MRT and CT scanners [24,28].
Another minimally invasive method used in thermal ablation is based on the application of fiber optic sensors (FOS) [24]. FOS have become an important alternative to the aforementioned classical temperature monitoring techniques due to their miniature form factor, bio-compatibility, fast temperature response, and high resolution [28]. FOS have no influence on temperature propagation due to their low heat conductivity, they are MR-compatible, and they can provide a cost-effective solution for thermal ablation applications [35]. One of the popular FOS technologies used for this application is the Fiber Bragg Grating (FBG) sensor. An FBG works at a particular wavelength, called the Bragg wavelength, which shifts proportionally to the applied temperature or strain. Several FBGs with different Bragg wavelengths can be incorporated into a single fiber by wavelength-division multiplexing, thus achieving a multiple-point sensing technology [35]. As a result, researchers have achieved real-time temperature monitoring with a spatial resolution of 10 mm [36], 6.5 mm [37], and 5 mm [38,39]. The spatial resolution can be further extended by distributed sensing using a standard single-mode fiber (SMF). The operating principle of distributed sensing is based on Rayleigh backscattering, which occurs along the entire length of the fiber due to its small interior imperfections. The backscattered spectrum is a property of each fiber, which responds to external stimuli, including temperature changes. Thus, the temperature change at each point of the fiber can be retrieved by analysing the spectral shift of the backscattered signal [40]. Beisenova et al. have presented an approach for the multiplexing of distributed sensing with the use of four high-scattering MgO-doped fibers as the sensing fibers, spatially separated by SMF regions [28]. That setup was applied for two-dimensional temperature monitoring of RF-ablated tissue with a pixel size of 2.5 mm × 5.0 mm [28]. In this work, a liver phantom of size 16 mm × 16 mm × 25 mm was exposed to MW ablation enhanced by gold nanoparticles of size 10 nm. According to Jelbuldina et al. [42], nanoparticle-mediated ablation, in comparison with pristine ablation, can dramatically increase the ablated region (to more than double the diameter) due to the variation of the thermoelectric properties of the tissue; however, the ablation shape is more irregular. Therefore, nanoparticle-enhanced ablation requires more precise temperature monitoring over the whole ablated region in three dimensions. Three-dimensional temperature measurement is achieved by increasing the number of multiplexed sensing fibers to twelve and arranging them around the heating applicator in two levels (inner and outer squares). A specific scaffold for precise insertion of the twelve fibers into the phantom was designed for this experiment, which is essential to mitigate positioning errors. The fibers were positioned 8 mm from each other and measurements along the fibers were taken every 2 mm; thus, the voxel size achieved by this setup is 2 mm × 8 mm × 8 mm.
Experimental Setup
As can be seen in Figure 1, the experiment was performed on a swine liver phantom ablated by a MW generator (Leanfa Hybrid RF/MW generator). The liver was pierced with twelve sensing optical fibers connected to an Optical Backscattering Reflectometry (OBR) interrogator (Luna OBR 4600, Luna Inc., Roanoke, VA, USA) to achieve 3D temperature monitoring.
Preparation of the Nanoparticle-Doped Liver Phantom
In the experiments, the thermal ablation was enhanced by doping the tissue with nanoparticles. Gold nanoparticles (GNPs, Sigma Aldrich, St. Louis, MO, USA) were used because they have better stability than other nanoparticles and can considerably increase the heat above physiological temperature [43,44]. A stock solution of spherical gold nanoparticles, 10 nm in diameter and stabilized in citrate buffer, was dispersed in 0.2% agarose in a 1:1 ratio. The agarose solution was prepared according to the protocol of Su et al. [45]. Finally, 100 µL of diluted nanoparticles were taken from the prepared stock solution and administered ex vivo onto the surface of the tissue using a pipette. The thermal ablation procedure was conducted immediately after the injection of nanoparticles in order to prevent their leakage. The nanoparticles were diluted in an agarose solution of low density so that the technique could be carried over to in vivo trials, where the nanoparticles could be injected intravenously.
Preparation of the Multiplexing Setup for 3D Temperature Monitoring
Temperature monitoring of the thermal ablation was conducted with the OBR instrument, the working principle of which is based on Rayleigh backscattering. The instrument measures the scattering of light from each infinitesimal section of the fiber and produces a spectrum of backscattered light as a function of position along the fiber (e.g., Figure 1b) [46-48]. The OBR instrument calculates the spectral shift of the fiber exposed to the external stimulus (temperature in this case) by dividing the spectrum into small sections (5 mm in this experiment). Each section is cross-correlated with the corresponding section of the reference spectrum taken before the application of heat to the fiber. The spectral shift is converted to a wavelength shift, which is proportional to the applied temperature change [46]. The resolution of the measurement depends on the sensor spacing parameter, which was set to 2 mm in this experiment (see Figure 1c).
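A minimal sketch of the section-wise cross-correlation idea (synthetic spectra; the actual OBR processing is more involved):

```python
import numpy as np

def spectral_shift(ref, meas, df):
    """Cross-correlate one 5 mm section of the measured backscatter
    spectrum against the reference section; the lag of the correlation
    peak times the bin width df gives the spectral shift."""
    ref = ref - ref.mean()
    meas = meas - meas.mean()
    corr = np.correlate(meas, ref, mode='full')
    lag = np.argmax(corr) - (len(ref) - 1)
    return lag * df

# toy check: the "measured" section is the reference shifted by 3 bins
rng = np.random.default_rng(3)
ref = rng.normal(size=512)
print(spectral_shift(ref, np.roll(ref, 3), df=1.0))   # -> 3.0
```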
The OBR equipment does not by itself allow multiplexing, because the scattering patterns of several SMFs overlap, but 3D temperature measurement requires multiple sensing fibers. Therefore, high-scattering fibers were developed by doping the core of SMFs with MgO nanoparticles. The fabrication details and performance characteristics were discussed by Blanc et al. [49,50]. The MgO-doped fibers were spliced with SMF pigtails of different lengths, so that each SMF is 1-2 cm longer than the combined length of the previous SMF and MgO-doped fiber. Twelve MgO-doped fibers with SMF pigtails were connected to the Luna OBR using a network of five couplers, as shown in Figure 1c. The scattering spectrum, given in Figure 1b, shows that the twelve MgO-doped fibers, with scattering levels in the range from −115 dB to −90 dB, were separated by the lower-scattering SMF regions in the range of −130 dB to −123 dB. The difference in the scattering levels of the two types of fibers allows the regions corresponding to each of the twelve MgO-doped fibers to be clearly distinguished and, thus, temperature measurements to be obtained from each of them.
In order to ensure that the wavelength shift recorded by the OBR was proportional to the temperature applied to the twelve sensing fibers, a temperature calibration experiment was conducted. The twelve MgO-doped fibers were placed inside a water bath, together with a standard FBG sensor used as a reference. The water bath was placed on a heating plate, and its temperature was gradually increased from 21.8 °C to 60.8 °C. Every 3 °C, the temperature of the water was measured by FBGs connected to a Micron Optics interrogator (si255, HYPERION) and, simultaneously, the backscattering of the twelve fibers was recorded. Figure 2 illustrates the wavelength shift of the spectra of all twelve fibers against the temperature recorded by the FBG. According to the calibration results, the sensitivity of the setup is 10.25 pm/°C and the relationship is indeed linear.
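The calibration amounts to a linear regression of wavelength shift against the reference temperature; a sketch with synthetic data scattered around the reported 10.25 pm/°C sensitivity:

```python
import numpy as np

# synthetic calibration data: bath temperature (deg C) vs. shift (pm)
T = np.arange(21.8, 60.9, 3.0)
shift_pm = 10.25 * (T - T[0]) + np.random.default_rng(4).normal(0, 2.0, T.size)

sens, offset = np.polyfit(T, shift_pm, 1)
print(f"sensitivity ~ {sens:.2f} pm/degC")   # expected near 10.25

def to_temperature(shift, sens=10.25, T0=21.8):
    """Invert the linear calibration: shift (pm) -> temperature (deg C)."""
    return T0 + shift / sens

print(to_temperature(205.0))                 # -> ~41.8 deg C
```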
Thermal Ablation Experiment
The nanoparticle-doped liver phantom was placed in a scaffold consisting of two plastic plates with dimensions of 100 mm × 100 mm × 5 mm. The plates were cut from a plexiglass sheet with a CO₂ laser cutter (Epilog Fusion M2). The distance between the plates was adjusted to 25 mm and the liver was fixed inside the scaffold (see Figure 1d). The ablation was conducted by inserting the applicator of the MW generator into the middle of the liver through a central hole of the scaffold with a diameter of 6 mm. The applicator has a conic-shaped active electrode with a height of 1.0 cm. The MW generator was set to a power of 70 W and a frequency of 2.45 GHz for 70 s and then switched off. The temperature was monitored by the twelve sensing fibers located around the applicator. The fibers were arranged, as shown in Figure 1e, in two levels: fibers 1-8 were located in the outer layer, farther from the applicator, and fibers 9-12 were located in the inner layer, closer to the applicator. The fibers were inserted into the liver by placing a medical needle (21 G, Balton, Poland) inside the holes of the scaffold, passing each fiber through the needle, and then removing the needle. Temperature monitoring was conducted for 140 s in order to record the temperature during both the heating and the cooling of the tissue. In total, 144 sensing points along the twelve fibers were used to monitor the temperature of the 16 mm × 16 mm × 25 mm volume of the liver. The resolution was 2.0 mm along the fibers (sensor spacing along each fiber) and 8.0 mm perpendicular to the fibers (distance between the fibers).
Results and Discussion
Thermal maps (temperature change over the ablated region over time), shown in Figures 3-5, were obtained by interpolating the data from all twelve fibers using the shading interp function in MATLAB, which resulted in smooth temperature maps. Inner fibers (9-12) are shown in Figure 3 and outer fibers (1-8) are represented in Figure 4. The general pattern is common to all fibers: the temperature increased during the first half of the experiment because of the transmitted MW ablation energy and then slowly decreased after the thermal ablation was stopped. Moreover, the highest temperature was detected by all of the fibers at their middle sensing regions (1.0-1.5 cm along the y-axis) because the applicator tip was located in the middle of the scaffold. Inner fibers (9-12) detected a temperature increase earlier because of their closer location to the applicator tip. For example, inner fibers reached a temperature of 90-100 °C after 40-50 s of ablation, while outer fibers detected this temperature after 50-60 s. The heat propagated inside the tissue non-uniformly. For example, lower fibers 3, 1, and 10 were detected to be at significantly higher temperatures than the corresponding upper fibers 5, 6, and 11. On the right side, a slightly higher temperature was detected by upper fibers (7 and 12) than by the respective lower fibers (2 and 9). Asymmetry between the right and left sides was also present: right fibers 7, 8, and 12 were hotter than left fibers 5, 4, and 11. The hottest region was the bottom-right corner and the coldest one was the upper-left corner. One possible reason for such asymmetry is non-uniform temperature propagation over the tissue, amplified by the uneven distribution of gold nanoparticles inside the tissue. As previously mentioned, the use of gold nanoparticles in thermal ablation results in a larger temperature increase, and it is possible that the liquidised nanoparticles travelled to the bottom of the tissue during the procedure. This can explain the hotter lower regions of tissue. Another reason is a possible small shift of the applicator from the center. Both situations can occur during a real thermal ablation procedure. Therefore, it is important to monitor the temperature over the entire volume of ablated tissue. Three-dimensional temperature monitoring can accurately identify the spatial asymmetry, which can be used for reconstruction of the ablated tissue. It can serve as a visual tool for a clinician, who can adjust the applicator closer to the cooler regions during the procedure and achieve more uniform heating of the tissue. Figure 5 illustrates a snapshot of the 3D thermal profile of the swine liver at the 80th second (see Multimedia Data, Video S1). The video shows the temperature change over the duration of the experiment on the five selected cross-sections along the fibers and applicator (y-axis). The temperature change is shown every 10 s in order to reduce the file size, but the OBR is able to acquire data at a frequency of 3 Hz. Each plane was created by interpolating the average temperature of the four fibers located closest to the applicator and the temperatures of the other eight fibers. The temperature increased throughout the tissue for the first half of the experiment and reached a maximum at approximately 70-80 s. After that, the entire tissue started to gradually cool down because the MW generator was switched off.
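The cross-sectional interpolation behind the thermal maps can be reproduced in outline with SciPy instead of MATLAB's shading interp; the fiber coordinates and temperatures below are illustrative placeholders, not measured values:

```python
import numpy as np
from scipy.interpolate import griddata

# (x, z) positions in mm of the twelve fibers at one height, plus the
# temperatures (deg C) they report at one instant (placeholder values)
pts = np.array([[0, 0], [8, 0], [16, 0], [0, 8], [16, 8],
                [0, 16], [8, 16], [16, 16],            # outer square
                [4, 4], [12, 4], [4, 12], [12, 12]])   # inner square
temps = np.array([55, 60, 58, 62, 64, 52, 57, 66, 85, 90, 80, 92.0])

# dense grid over the 16 mm x 16 mm cross-section, cubic interpolation
xi, zi = np.meshgrid(np.linspace(0, 16, 81), np.linspace(0, 16, 81))
tmap = griddata(pts, temps, (xi, zi), method='cubic')
print(np.nanmin(tmap), np.nanmax(tmap))
```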
Since the applicator tip was located in the middle of the scaffold, the third cross-section, located on the y = 1.3 cm plane, experienced the highest temperatures during the whole experiment. This can also be seen in Figure 6, which illustrates the average temperature of all five cross-sectional planes. The highest average temperature was experienced by the middle plane containing the applicator tip, followed by the two neighboring planes located at distances of 0.65 cm and 1.95 cm along the y-axis. We performed the experiment with gold nanoparticles in several trials to validate that the results were not affected by random events. Figure 7 indicates the change in the average temperature in the five layers for a trial in which the duration of the experiment was 100 s and the MW was stopped at the 50th second. Both trials showed identical temperature behavior, except that the highest temperature in Figure 7 was about 60 °C, which might change with the duration of the experiment.
Conclusions
In conclusion, we propose a distributed network able to perform a simultaneous scan of twelve fiber sensors for three-dimensional temperature monitoring during MW ablation of porcine liver. The multiplexing of the twelve fibers is based on the higher Rayleigh backscattering of the MgO-doped fibers compared to the standard SMFs, which act as delays for the spacing of the sensing regions. The twelve sensing regions, each 25 mm in length, were inserted through a plastic scaffold, which was used to hold the liver exposed to MW ablation. The MW ablation was enhanced through ex vivo injection of gold nanoparticles in order to achieve better heat propagation. The results are presented as temperature planes of the scaffold unit, with a spatial resolution of 2 mm along the length of the MgO-doped fiber and of 8 mm in the perpendicular directions. The setup has 144 sensing points for a tissue volume of 16 mm × 16 mm × 25 mm. The obtained thermal maps can serve as a real-time guidance method for nanoparticle-mediated MW ablation, so that the applicator's position or delivered power can be adjusted during the procedure to prevent tissue damage and achieve successful thermal ablation.
"Physics"
] |
A Flower‐Shaped Thermal Energy Harvester Made by Metamaterials
Harvesting thermal energy from arbitrary directions has become an exciting theoretical possibility. However, an exact 3D thermal energy harvester is still challenging to achieve, given the stringent requirement of highly anisotropic and symmetrical structures built from homogeneous materials, as well as the absence of effective characterization. In this Communication, a flower-shaped thermal harvesting metamaterial is originally promoted. Numerical simulations imply that heat flux can be concentrated into the target core and that the resulting temperature gradient is more than two times larger than the applied one, without obvious distortion or perturbation of the temperature profile outside the concentrator. Temperature transitions of the actual device are experimentally measured to validate the novel structure, showing consistency with the simulated results. With ultra-efficiency independent of geometrical size, the flower-shaped thermal harvester facilitates energy harvesting at multiple scales with splendid efficiency and might help to improve thermoelectric device efficiency from a totally new perspective.
Based on spatial structure design, artificially arranged thermal conductivities can be achieved to concentrate heat current with homogeneous and isotropic engineering materials. Herein, we pioneered a 3D flower-shaped thermal energy harvester capable of efficiently harvesting heat from arbitrary directions. Temperature profile transitions were precisely measured with thermocouples. Theoretical identifications, numerical simulations, and experiments on 2D and 3D thermal energy harvesters are presented in this Communication.
With thermal energy harvested at more than double efficiency (enhanced temperature gradients and heat flux), the numerical and experimental results open up intriguing possibilities for a future all-in-one system, with a wide variety of potential applications in solar thermal panels, thermoelectric generators, and battery- or supercapacitor-type systems. [20,21] As shown in Figure 1, copper was employed to fabricate the flower-shaped functional structure, and the background medium was made of stainless steel. Figure 2a shows the corresponding simulated results for the temperature profile and heat flux in the 2D case. Transformative thermal streamlines indicate that the energy density and temperature gradient in the inner core were prominently intensified compared to the case without the concentrator. Figure 2b implies that the numerical simulation was in agreement with the experimental results. More specifically, the magnitudes of these temperature observables were collected by thermocouples (Figure 2c). The distribution of temperature along the observation lines intuitively shows a conspicuous transition of the temperature fields within the thermal concentrator, while the temperature fields beyond the concentration cell remained as in the original.
Thermal Energy Harvesting
Solar energy, ocean energy, geothermal energy, and even waste heat from industry are considered green and renewable energy sources for industrial and social needs. [1] Therefore, how to collect and utilize this thermal energy more effectively is a significant task and challenge.
Application-oriented metamaterials, [2-4] exhibiting anomalous and intriguing performance in different physical fields, [5-10] are promising candidates for promoting waste-heat utilization. Heat current has been theoretically and experimentally manipulated, controlled, and processed as a medium with artificial materials. [11-13] Thermal cloaks were first theoretically proposed for transient protection of an object from external heat flux. In our device, the temperature gradient was remarkably intensified in the concentration core, indicating its excellent heat-concentrating performance; namely, energy was harvested in the target core with respectable efficiency. Therefore, a homogeneous and tunable isotropic flower-shaped thermal concentrator was simply achieved with engineering materials. As shown in Figure 3, the considered space, including the inner core (0 ≤ r ≤ a) and the exterior region (r ≥ b), was made of a homogeneous and isotropic background material with thermal conductivity κ₁, while a homogeneous host material with thermal conductivity κ₂ was employed for the flower-shaped functional structure region (a ≤ r ≤ b). Petal elements of the flower-shaped functional structure were uniformly distributed, with a flexible number of petals (m). A stable plane heat source (T₁) and cold source (T₂) were applied to the boundaries along an arbitrary direction to establish a temperature difference ΔT = T₁ − T₂ and an applied temperature gradient Grad T₀, in order to achieve the high degree of anisotropy required for superior flux concentration. [1] For 0 ≤ r ≤ a (inner core) and the exterior region (r ≥ b), the required effective-conductivity condition is given by Equation (2).
If κ₂ > κ₁ > κ_air and m is simultaneously large enough, Equation (2) can be exactly satisfied. Here, the frequently used engineering materials stainless steel (κ₁ = 16 W m⁻¹ K⁻¹) and copper (κ₂ = 377 W m⁻¹ K⁻¹) were employed as the background material and host material, respectively, with m = 54, a = 10 mm, and b = 25 mm. COMSOL Multiphysics was used to investigate the temperature gradient and thermal flux transitions with Equation (3). The heat flux converged into the inner core, and the energy density in the inner core was considerably enhanced by precisely manipulating the local flow of the given thermal concentrator. Meanwhile, the thermal field of the exterior region remained ideally undisturbed. We also found that the temperature gradient of the inner core was prominently increased, by 150% compared to the applied one. If the parameters a, b, and m are ideally matched, a perfect thermal concentrator with excellent geometric flexibility to scale up and down and desirable efficiency can be realized.
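For readers without COMSOL, the qualitative effect of embedding a high-conductivity structure in a low-conductivity background can be explored with a crude finite-difference relaxation; the sketch below uses a plain annulus instead of the actual petal geometry, so it only illustrates how an inhomogeneous κ reshapes the temperature field, not the true performance of the concentrator:

```python
import numpy as np

def steady_state_T(kappa, T_hot=393.15, T_cold=323.15, iters=20_000):
    """Relaxation of div(kappa grad T) = 0 on a square grid with fixed
    hot (left) and cold (right) plates; face conductivities are
    harmonic means of the neighboring cells."""
    hmean = lambda a, b: 2.0 * a * b / (a + b)
    kN = hmean(kappa[1:-1, 1:-1], kappa[:-2, 1:-1])
    kS = hmean(kappa[1:-1, 1:-1], kappa[2:, 1:-1])
    kW = hmean(kappa[1:-1, 1:-1], kappa[1:-1, :-2])
    kE = hmean(kappa[1:-1, 1:-1], kappa[1:-1, 2:])
    n = kappa.shape[0]
    T = np.tile(np.linspace(T_hot, T_cold, n), (n, 1))
    for _ in range(iters):
        T[1:-1, 1:-1] = (kN * T[:-2, 1:-1] + kS * T[2:, 1:-1] +
                         kW * T[1:-1, :-2] + kE * T[1:-1, 2:]) / (kN + kS + kW + kE)
        T[:, 0], T[:, -1] = T_hot, T_cold        # Dirichlet plates
    return T

# two-phase map: steel background (16 W/mK) with a copper annulus (377 W/mK)
n = 61
k = np.full((n, n), 16.0)
y, x = np.mgrid[:n, :n] - n // 2
r = np.hypot(x, y)
k[(r > 10) & (r < 25)] = 377.0
print(steady_state_T(k)[n // 2, ::10].round(1))  # profile through the core
```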
Thereafter, experiments were conducted to verify the simulated results. The background material, host material, a, b, and m were kept in line with the numerical simulations. The plane heat source and cold source were set to T1 = 393.15 K and T2 = 323.15 K, respectively, with thermocouples lined up along the observation line to verify the temperature-gradient transformation in the functional regions. Figure 4b shows the temperature distribution of the actual device. The temperature gradient increased by 130% in the target inner core, in close agreement with the simulated results, while the temperature profile in the exterior region remained almost unperturbed.
Furthermore, unlike previously reported work, [17,19] considerable flexibility in selecting suitable materials, including metals, polymers, and others, makes the design promising for applications under a variety of temperature and environmental conditions. Although optimizing the amplification of temperature gradient and energy density was beyond the scope of this experiment, desirable efficiency of the flower-shaped thermal concentrator could be accomplished with optimal materials and appropriate geometric size in specific cases.
The 3D flower-shaped thermal energy harvester has perfect symmetry and is thus apt to harvest thermal energy from arbitrary directions. With thermocouples measuring the temperature profile precisely, the experimental and numerical results agree well with each other. The temperature-gradient transformation opens new vistas for improving the efficiency of thermoelectric devices under the constraint of limited ZT values (the figure of merit quantifying the ability of a given material to produce thermoelectric power efficiently, which depends on the Seebeck coefficient S, thermal conductivity κ, electrical conductivity σ, and temperature T), by enhancing the temperature difference and heat-flux density over an unchanged distance. By combining a 3D flower-shaped thermal energy harvester with thermoelectric devices, waste heat can be harvested, converted, and stored efficiently.
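For reference, the dimensionless figure of merit mentioned above takes its standard form from the thermoelectrics literature (the text names the quantities but omits the formula):

```latex
ZT = \frac{S^{2}\,\sigma\,T}{\kappa}
```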
This type of thermal guiding and energy concentrator may also find applications in devices such as solar cells and biotherapy, where a high energy density or a larger, uniformly distributed temperature gradient plays a critical role. It also opens up intriguing possibilities for future all-in-one systems.
We proposed a 3D flower-shaped energy concentrator built from widely used engineering materials, which greatly facilitates thermoelectric generators and other energy harvesters, and validated it with numerical simulations as well as experiments. Engineering materials with different constant heat conductivities were employed in the flower-shaped spatial structure; the structure therefore exhibits overall anisotropic heat conduction and consequently harvests energy in a target core. Simulations and experiments consistently demonstrated its excellent concentration efficiency, with a remarkable increase in energy density and an intensified temperature gradient in the target core, while the exterior temperature profile remains undisturbed. It should be noted that, with judiciously selected materials, the efficiency and geometrical size can be tailored to different conditions. Reduced to the 2D case, the spatial-structure-oriented thermal concentrator remains exactly valid. With its extraordinary structure-oriented and heat-conduction-oriented properties, the flower-shaped thermal concentrator may find potential applications in devices such as solar thermal panels, thermoelectric generators, and miniature heat-therapy instruments. A combination of the 3D flower-shaped thermal energy harvester with thermoelectric generators may open new vistas for future all-in-one systems. | 2,084.6 | 2017-07-27T00:00:00.000 | [
"Engineering"
] |
Recognition of human activities with wearable sensors
A novel approach for recognizing human activities with wearable sensors is investigated in this article. The key techniques of this approach are generalized discriminant analysis (GDA) and relevance vector machines (RVM). The feature vectors extracted from the measured signal are processed by GDA, with their dimension remarkably reduced from 350 to 12 while fully retaining the most discriminative information. The reduced feature vectors are then classified by the RVM technique according to an extended multiclass model, which shows good convergence characteristics. Experimental results on the Wearable Action Recognition Dataset demonstrate that our approach achieves an encouraging recognition rate of 99.2%, a true positive rate of 99.18%, and a false positive rate of 0.07%. Although in most cases the support vector machines model has more than 70 support vectors, the number of relevance vectors related to different activities is never more than 4, which implies great simplicity in the classifier structure. Our approach is expected to have potential in real-time applications and in solving problems with large-scale datasets, owing to its near-perfect recognition performance, strong feature-reduction ability, and simple classifier structure.
Introduction
Activity recognition has become one of the most active topics in context-aware computing, due to its wide application prospects in industrial, educational, and medical domains [1][2][3]. For instance, characterizing the activities of assembly-line workers can increase safety and reliability and improve productivity [1]; as another example, monitoring human activities of daily life can provide very useful information for medical diagnosis, elderly care, as well as the assessment of individuals' physical and mental conditions [2].
Early studies in activity recognition employed vision-based systems with single or multiple video cameras, which remain the most common means to date [4,5].
In general, such systems may be acceptable and practical in a laboratory or well-controlled environment. However, when activities take place in a real-home setting or outdoors, the accuracy of activity recognition is affected by variable lighting conditions or clutter disturbance [6]. Wearable-sensor-based systems offer an appropriate alternative for activity recognition [7][8][9][10][11], being inherently immune to shadow and occlusion effects. Furthermore, compared with vision-based systems, this kind of system does not expose additional private information, so the subjects may act more naturally, as in their daily life. Another major advantage of wearable-sensor-based systems is that the cost of storing and processing data can be greatly reduced. Therefore, in this article we focus on developing an effective approach for activity recognition with wearable sensors.
Feature dimension reduction (which can be seen as a feature selection operation) is an important and essential procedure before classification [6][7][8]. A high-dimensional feature set causes the following problems: first, some features are irrelevant or redundant and cannot provide supportive information for classification (in the worst case, inappropriately emphasizing such features may even hinder recognition accuracy); second, training classifiers in a high-dimensional space is difficult and time consuming. Therefore, it is desirable to effectively reduce the dimension of the feature set before performing the classification task. Principal component analysis (PCA) and linear discriminant analysis (LDA) are two classical techniques for data dimension reduction [7][8][9][12][13]. The essence of PCA is to find the optimal projection directions that maximally preserve the data variance. However, it does not take the class label into account, so the 'optimal' projection directions may not yield the most discriminative features. LDA, in contrast, seeks the optimal projection directions that maximize the ratio of between-class scatter to within-class scatter; however, it cannot capture nonlinear relationships among samples. To overcome this weakness of LDA, the generalized discriminant analysis (GDA), based on the kernel trick, was proposed in [14]; it can be viewed as an extension of LDA. Indeed, GDA has proved superior to LDA in many applications such as face recognition, document analysis, and image retrieval [15][16][17].
As for existing classification methods, most can be divided into two categories, depending on whether or not they are based on kernel-based learning. Non-kernel-based classification methods [for instance, the k-nearest neighbor (k-NN) method] usually give equal or comparable weight to each training sample, which may not be reasonable in some cases. Kernel-based learning classification methods [for instance, the support vector machines (SVM) method] instead try to pick out the informative samples for classification; these methods usually achieve higher recognition accuracy while maintaining relative sparsity of the support vectors. In the last decade, Bayesian theory was introduced into the design of classification methodology, of which the relevance vector machines (RVM) method [18] is representative. As a Bayesian extension of the SVM, the RVM can provide posterior probabilistic output for class memberships. Furthermore, the RVM requires a dramatically smaller number of relevance vectors (RVs), which makes it more suitable for real-time applications.
Extensive study has been devoted to human activity recognition with wearable sensors [7][8][9][10][11]. Fleury et al. [7] recognized seven kinds of human daily activities. Data associated with the activities were collected by an infrared sensor, a temperature and hygrometry sensor, and wearable kinematic sensors. A feature set was extracted from the raw sensor data by PCA, and the SVM method was then employed in the classification process. The cross-validation test achieved an overall recognition accuracy of 86%. Khan et al. [8] used LDA to generate the feature set and artificial neural networks (ANNs) to recognize human activities from data recorded by body-worn accelerometers. Altun et al. [9] constructed a hardware system with gyroscopes, accelerometers, and magnetometers, and conducted a comparative study on human activity recognition. They considered in total seven classification methods: Bayesian decision, decision tree, the least-squares method, k-NN, dynamic time warping, SVM, and ANN, of which the SVM achieved the best recognition performance in the 'leave-one-out' cross-validation process.
In this article, we put forward a new approach for human activity recognition. We employ GDA to reduce the feature dimension and then construct a multiclass RVM classifier to perform the classification task. To the best of the authors' knowledge, both the GDA and RVM techniques are applied in a wearable-sensor-based system for the first time.
The rest of this article is organized as follows. Section 2 provides a brief description of the Wearable Action Recognition Dataset (WARD) [19]. Detailed information about the feature extraction and feature dimension reduction process is presented in Section 3. In Section 4, we first provide a review of the RVM classification technique and then introduce the construction procedure for the multiclass RVM classifier. Section 5 reports the experimental results. Conclusions are given in Section 6.
Wearable action recognition dataset
In this article, we used a publicly available dataset called WARD (http://www.eecs.berkeley.edu/~yang/software/WAR/) for human activity recognition, which was introduced by Yang et al. [19]. To construct this dataset, 20 volunteers, i.e., 7 females and 13 males with a wide age range (from 15 to 75 years old), were enrolled to perform 13 activities. All 13 kinds of activities are listed in Table 1.
The data were recorded by five custom-built sensor boards, which were attached to different body parts: two on the wrists, one on the waist, and two on the ankles. Each sensor board is equipped with two sensors: a tri-axial accelerometer with a range of ±2 g and a bi-axial gyroscope with a range of ±500°/s. Since the output of each accelerometer and gyroscope has 3 and 2 dimensions, respectively, the activity signal has 25 dimensions in total.
Figure 1 provides instances of the measured signal for three different activities of the same subject. Subfigures in the first row show the three-dimensional data recorded by the accelerometer, while subfigures in the second row show the two-dimensional data recorded by the gyroscope located at the waist. As expected, there are considerable differences in the magnitude and period of the measured signal among different activities.
Feature extraction and reduction
In this section, we first describe the data preprocessing procedure, and then present the feature extraction and data normalization process. Finally, the GDA is introduced for feature reduction.
Data preprocessing
In general, it is unnecessary to analyze the entire recorded data for activity recognition. Therefore, we divided each raw sample record into small windows before feature extraction. In order to sufficiently capture the information of human activity and to be convenient for the FFT-based computation of the frequency-domain features, the window length was set to 2^n. Four different window lengths, 32, 64, 128, and 256 samples, were investigated, and the best recognition performance was achieved with a window length of 256 samples. This can be explained as follows. At a sampling rate of 30 Hz, the time durations corresponding to the four window lengths are approximately 1, 2, 4, and 8.5 s, respectively. Bouten et al. [20] pointed out that the frequency of human daily activity mainly ranges from 1 to 18 Hz; the time duration of 8.5 s therefore covers at least several cycles of human activity, and accordingly the recognition performance is more reliable (i.e., a shorter time duration covers fewer activity cycles, so the recognition performance has lower credibility). In this study, window lengths larger than 256 samples were not taken into consideration, for two reasons: (1) for some activities, the number of samples contained in the raw record is less than 512; for example, the raw record of the "Jog" activity of test subject 5 contains only 397 samples.
Table 1 (fragment). Description of activities performed by each subject:
…: The subject walks clockwise for more than 10 s
Turn left (TL): The subject stays at the same position and turns left for more than 10 s
Turn right (TR): The subject stays at the same position and turns right for more than 10 s
Go upstairs (Up): The subject goes up a flight of stairs
Go downstairs (Down): The subject goes down a flight of stairs
Jog (Jog): The subject jogs straight forward for more than 10 s
Jump (Jump): The subject stays at the same position and jumps more than 5 times
Push wheelchair (Push): The subject pushes a wheelchair/walker for more than 10 s
(2) As shown below in Table 2, with a window length of 256 samples a recognition rate as high as 99.2% can be achieved by our method. Increasing the window length may slightly improve the recognition rate, but will result in a longer delay.
As a result, we set the window length to 256 samples in the following experiments. The full data size in a truncated window is then 256 × 25, where 25 is the dimension of the activity signal (each of the five sensor nodes provides a five-dimensional signal), and there is an overlap of 128 × 25 data points (i.e., 50%) between consecutive windows.
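The segmentation just described can be sketched in a few lines; the array name and its shape below are illustrative placeholders, not the authors' code:

```python
import numpy as np

# 25-channel signal, 256-sample windows, 50% (128-sample) overlap.
def sliding_windows(signal: np.ndarray, length: int = 256, step: int = 128):
    """Yield (length, channels) windows from a (samples, channels) signal."""
    for start in range(0, signal.shape[0] - length + 1, step):
        yield signal[start:start + length, :]

signal = np.random.randn(3000, 25)       # stand-in for one WARD recording
windows = list(sliding_windows(signal))
print(len(windows), windows[0].shape)    # e.g. 22 windows of shape (256, 25)
```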
Feature extraction
Features derived from both the time domain and the transform domain have been used for activity recognition in most previous studies [9,21]. In this study, we simply chose the features most frequently adopted by previous researchers, rather than deliberately selecting them. Specifically, the selected time-domain features are the mean value, variance, skewness, and kurtosis, computed per window and per dimension with N = 256, where s_{i,n} represents the nth data value in the ith dimension of a given window (their standard forms are sketched below). The selected transform-domain features are the magnitudes of the five largest peaks of the resultant fast Fourier transform (FFT), as well as the magnitudes of the five largest peaks of the resultant cepstrum coefficients. These six types of features are employed for activity recognition.
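The four time-domain statistics named above were elided from the extracted text; their textbook forms, under the assumption of a straightforward 1/N normalization (the paper's exact convention is not recoverable), are:

```latex
\mu_i = \frac{1}{N}\sum_{n=1}^{N} s_{i,n}, \qquad
\sigma_i^{2} = \frac{1}{N}\sum_{n=1}^{N}\bigl(s_{i,n}-\mu_i\bigr)^{2}, \qquad
\mathrm{skew}_i = \frac{1}{N}\sum_{n=1}^{N}
  \left(\frac{s_{i,n}-\mu_i}{\sigma_i}\right)^{3}, \qquad
\mathrm{kurt}_i = \frac{1}{N}\sum_{n=1}^{N}
  \left(\frac{s_{i,n}-\mu_i}{\sigma_i}\right)^{4}
```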
Figure 2 gives an instance of signal representations in both the time and the transform domain, for windows of two specific activities. Figure 2a,b shows the signal along the z-axis for the walking-forward and jump activities, recorded by the accelerometer located at the waist. Figure 2c,d shows the resultant FFT of the signals in (a) and (b), respectively (with the five largest FFT peaks marked with 'O'), while Figure 2e,f shows the resultant cepstrum of the signals in (a) and (b), respectively (with the five largest cepstral peaks marked with 'O').
According to the above feature extraction procedure, a feature vector with 14 elements is obtained from each window along every dimension, i.e., 4 elements in the time domain and the remaining 10 in the transform domain. Thus, the total dimension of the feature vector is 350 × 1.
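The 14-features-per-channel extraction can be sketched as follows. Peak selection is simplified here to the five largest magnitudes, and scipy's conventions (e.g., excess kurtosis) are assumptions; the paper's exact peak-picking rule is not specified in the extracted text:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def window_features(win: np.ndarray) -> np.ndarray:
    """win: (256, 25) -> flat feature vector of length 350 (14 per channel)."""
    feats = []
    for ch in win.T:                                    # per channel
        spec = np.abs(np.fft.rfft(ch))                  # FFT magnitudes
        ceps = np.abs(np.fft.irfft(np.log(spec + 1e-12)))  # real cepstrum
        feats.extend([ch.mean(), ch.var(), skew(ch), kurtosis(ch)])
        feats.extend(np.sort(spec)[-5:])                # top-5 FFT magnitudes
        feats.extend(np.sort(ceps)[-5:])                # top-5 cepstral peaks
    return np.asarray(feats)                            # shape (350,)

print(window_features(np.random.randn(256, 25)).shape)  # (350,)
```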
Data normalization
Commonly, the selected features are heterogeneous. Directly feeding the feature data acquired as in Section 3.2 into the subsequent classification may lead to problems, especially when the distribution characteristics of the selected features show dramatic discrepancies. Therefore, all features should be normalized before constructing the classifiers. For simplicity, we normalize each feature linearly as x'_j = (x_j − min{f_j}) / (max{f_j} − min{f_j}), where x_j denotes the value of the jth feature before normalization, while 'max{f_j}' and 'min{f_j}' represent the maximum and minimum values of this feature throughout the whole dataset. Thereafter, all feature values fall into the range [0, 1].
Feature dimension reduction by GDA
As described above, GDA is a nonlinear data dimension reduction method based on the kernel-function learning technique, which is used here to process the normalized feature vectors. Given a training dataset {x_i}, i = 1, ..., N, containing L classes, with n_l samples belonging to class l (i.e., N = Σ_{l=1}^{L} n_l), the GDA operation consists of two steps: first, the data x_i are transformed from the original feature space R into a new space F via a nonlinear mapping φ: R → F, x_i ↦ φ(x_i); then, classical LDA is carried out in this new feature space.
The between-class scatter matrix B and the within-class scatter matrix W in the feature space F take the standard forms B = (1/N) Σ_{l=1}^{L} n_l φ̄_l φ̄_l^T and W = (1/N) Σ_{l=1}^{L} Σ_{k=1}^{n_l} (φ(x_lk) − φ̄_l)(φ(x_lk) − φ̄_l)^T, where x_lk denotes element k of the class c_l and φ̄_l = (1/n_l) Σ_{k=1}^{n_l} φ(x_lk) represents the mean of the class c_l in the space F. The GDA method finds the projection matrix v that maximizes the ratio J(v) = (v^T B v)/(v^T W v). Note that explicitly carrying out the mapping φ would be a demanding task; therefore, the reproducing-kernel trick is adopted when deriving the projection matrix. Since the rank of B is no more than L − 1, the upper bound on the number of extracted features t is L − 1. More details about GDA are available in [14].
After performing GDA on the normalized activity feature vectors, their dimension is reduced to n (n ≤ L − 1). In our case, there are in total L = 13 activities, so the dimension of the feature vectors is reduced to no more than 12. We tested the performance of GDA with reduced dimensions ranging from 1 to 12, and finally set the dimension parameter to 12, as it provides the best recognition performance. We also used PCA and LDA for comparison. The resulting three-dimensional features with the highest weights from each method are picked out and drawn in Figure 3. It can be seen that GDA captures the discriminative information better than PCA and LDA, which is a good prognosis for the recognition performance.
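GDA is kernel discriminant analysis, for which scikit-learn has no direct class; the sketch below therefore approximates it with an explicit RBF feature map (Nystroem) followed by ordinary LDA. This is an approximation under assumed hyperparameters, not the paper's implementation, and the synthetic data merely mimic the 350-dimensional, 13-class setting:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=350, n_classes=13,
                           n_informative=30, n_clusters_per_class=1,
                           random_state=0)
# RBF feature map + LDA: a practical stand-in for kernel (generalized) DA.
gda_like = make_pipeline(Nystroem(gamma=0.01, n_components=200, random_state=0),
                         LinearDiscriminantAnalysis(n_components=12))
Z = gda_like.fit_transform(X, y)
print(Z.shape)   # (500, 12): at most L - 1 = 12 discriminant directions
```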
RVM classification techniques
In this section, we review the binary RVM theory and then introduce a multiclass RVM model for solving the multiclass problem. Based on this, the multidimensional activity feature vectors can be classified.
The binary RVM
RVM was originally designed for the binary classification problem. Given a training dataset of N input-target pairs {x_n, t_n}, n = 1, ..., N, where x_n ∈ R^m is a training sample and t_n ∈ {0, 1} is its target value, suppose the posterior probability of x_n with t_n = 1 is given by P(t_n = 1|x_n, w) = σ{y(x_n; w)}, where σ(y) is the logistic sigmoid function defined by σ(y) = 1/(1 + e^−y); the posterior probability of x_n with t_n = 0 is then P(t_n = 0|x_n, w) = 1 − σ{y(x_n; w)}. On the assumption that the input variables x_n are independent of each other, the likelihood of the entire set of training samples can be written with the Bernoulli distribution as P(t|w) = Π_n σ{y(x_n; w)}^{t_n} [1 − σ{y(x_n; w)}]^{1−t_n}, where w is the weight vector w = [w_1, w_2, ..., w_N]^T. Here, we assume that w is well described by a zero-mean Gaussian prior, p(w|α) = Π_n N(w_n|0, α_n^−1). In order to find the 'most probable' weights w_MP, an iterative procedure based on a Laplace approximation is utilized. With a fixed value of α, the logarithm of the posterior distribution over the weights w is given by ln p(w|t, α) = Σ_n [t_n ln y_n + (1 − t_n) ln(1 − y_n)] − (1/2) w^T A w + const (13), where A = diag(α_1, ..., α_N). Maximizing this expression yields the mean value w_MP and its covariance Σ_MP as w_MP = A^−1 Φ^T (t − y) (14) and Σ_MP = (Φ^T B Φ + A)^−1 (15), where B is an N × N diagonal matrix with elements b_n = y_n(1 − y_n), the vector y = (y_1, y_2, ..., y_N)^T, and Φ is the design matrix with elements Φ_ni = φ_i(x_n). The detailed derivation of these two equations is provided in the Appendix.
Consequently, the corresponding marginal likelihood can be expressed, via the Laplace approximation, as p(t|α) = ∫ P(t|w) p(w|α) dw ≈ P(t|w_MP) p(w_MP|α) (2π)^{N/2} |Σ_MP|^{1/2} (16). To maximize it, the parameter α_n is updated in each iteration as α_n = (1 − α_n Σ_MP,nn)/w_n^2 (17), where w_n denotes the nth element of the estimated posterior weights w_MP and Σ_MP,nn represents the nth diagonal element of the posterior covariance matrix Σ_MP from Equation (15). The procedure via Equations (13)-(17) is repeated until the preset convergence criterion is met. At this point, the training stage for the binary classifier is completed. In the classification stage, the test sample x is classified to the class t ∈ {0, 1} that maximizes the conditional probability P(t|x, w).
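A compact NumPy rendering of this training loop may make the update order concrete. It is a didactic reconstruction of Equations (13)-(17), not the authors' implementation: an RBF kernel is assumed as the set of basis functions, and pruning of weights with large α_n as well as the bias term are omitted for brevity:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """RBF kernel matrix between row-sample arrays X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_rvm(X, t, gamma=1.0, outer=30, inner=10):
    """Binary RVM training following Equations (13)-(17); t in {0, 1}."""
    N = len(t)
    Phi = rbf(X, X, gamma)          # one basis function per training sample
    alpha = np.ones(N)              # precision of each weight's prior
    w = np.zeros(N)
    for _ in range(outer):
        for _ in range(inner):      # IRLS / Newton steps toward w_MP (Eq. 14)
            y = 1.0 / (1.0 + np.exp(-(Phi @ w)))
            grad = Phi.T @ (t - y) - alpha * w          # gradient of Eq. (13)
            B = (y * (1.0 - y))[:, None]
            H = Phi.T @ (B * Phi) + np.diag(alpha)      # negative Hessian
            w = w + np.linalg.solve(H, grad)
        Sigma = np.linalg.inv(H)                        # Eq. (15)
        gamma_n = 1.0 - alpha * np.diag(Sigma)
        alpha = np.clip(gamma_n / (w ** 2 + 1e-12), 1e-6, 1e12)  # Eq. (17)
    return w

def predict_proba(X_train, X_test, w, gamma=1.0):
    """Posterior probability P(t = 1 | x) for each test sample."""
    return 1.0 / (1.0 + np.exp(-(rbf(X_test, X_train, gamma) @ w)))
```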
Multiclass RVM
The traditional RVM solves the binary classification problem. However, practical activity recognition usually involves a multiclass task; for instance, the WARD dataset contains 13 kinds of activities in total. Therefore, the RVM must be extended to the multiclass situation. A first possible scheme is to directly generalize the RVM to the multiclass RVM as in [18]. However, in this case the size of the covariance would scale linearly with the total number of classes, which is a disadvantage from the computational perspective [18]. A second possible solution is to treat the L-class problem as a set of two-class problems. In this case, the simplest way, called 'one-versus-all', is to train L individual binary classifiers and integrate them together. The test sample x is then classified to the class t_i whose binary classifier yields the largest posterior probability, i.e., i = argmax_j P(t_j = 1|x, w_j).
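Continuing the sketch above, the one-versus-all wrapper is only a few lines; train_rvm and predict_proba refer to the illustrative functions from the previous block:

```python
import numpy as np

def train_one_vs_all(X, labels, n_classes, gamma=1.0):
    """Train one binary RVM per class (targets: class c vs. the rest)."""
    return [train_rvm(X, (labels == c).astype(float), gamma=gamma)
            for c in range(n_classes)]

def classify(X_train, X_test, weight_list, gamma=1.0):
    """Assign each test sample to the class of maximal posterior probability."""
    probs = np.stack([predict_proba(X_train, X_test, w, gamma=gamma)
                      for w in weight_list])
    return probs.argmax(axis=0)
```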
Experimental results
In this section, we first examine the convergence characteristic of the constructed RVM classifiers and present the classification results for the proposed approach with threefold cross validation on WARD (Section 5.1). Then, we compare the recognition performances with different feature reduction techniques and different classification techniques, respectively. We also show a comparison between our approach and other existing methods on the same dataset (Section 5.2).
Recognition performance with the proposed approach
As described in Sections 3 and 4, the feature vectors are extracted from the measured activity signal, compressed into 12 dimensions by GDA, and subsequently classified with the multiclass RVM technique. To evaluate the recognition performance of our approach, we performed threefold cross validation on the entire WARD dataset. Specifically, all feature vectors were randomly divided into three partitions, of which one was retained as the validation set for testing and the remaining two were used for training. This cross-validation process was performed three times, so that each partition was validated. The whole procedure was repeated ten times, and the resulting recognition rates were averaged.
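This protocol maps directly onto a repeated stratified k-fold loop. scikit-learn ships no RVM, so an RBF-kernel SVC stands in for the classifier below, and the data arrays are placeholders for the GDA-reduced WARD vectors:

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC

features = np.random.randn(600, 12)          # stand-in for GDA-reduced vectors
labels = np.random.randint(0, 13, size=600)  # stand-in for 13 activity labels

# Threefold CV, repeated ten times, accuracies averaged.
cv = RepeatedStratifiedKFold(n_splits=3, n_repeats=10, random_state=0)
scores = cross_val_score(SVC(kernel="rbf"), features, labels, cv=cv)
print(f"mean accuracy over {len(scores)} folds: {scores.mean():.3f}")
```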
During the cross-validation process, we established an RVM classifier for each kind of activity, i.e., there were in total 13 different RVM classifiers. To examine the convergence characteristic of the constructed classifiers, we monitored their marginal likelihood versus the number of iterations. As shown in Figure 4, the likelihood of the 'ST' activity (denoted by the solid line with squares) converged quickly, after about 12 iterations, and those of the remaining activities show a similar tendency, which demonstrates the consistency of the classifiers and the ease of constructing them.
Figure 5 shows the recognition results with different numbers of the feature types mentioned in Section 3. It can be seen that the recognition accuracy gradually improves as the number of employed feature types increases, which indicates that the different feature types considered in this article provide complementary information. As a result, all six types of features are used for recognition in the following experiments.
The confusion matrix for the recognition result (with all six feature types) is given in Table 3. It can be observed that the confusion occurrences are distributed in an unbalanced way. For instance, the three most confused pairs are 'Up' and 'Jog', 'SI' and 'ST', as well as 'ST' and 'SI'; the confusion rates between them reach 2.4, 1.45, and 1.31%, respectively, while some pairs such as 'LY' and 'Push', 'TR' and 'WR', as well as 'TL' and 'Jump' are never confused with each other. This can probably be explained as follows. The subjects do not have to move their ankles to perform either the 'SI' or the 'ST' activity, and the accompanying movement of the waist is always quite small; thus, the sensor nodes provide less discriminable information for those two kinds of activities. 'Up' is sometimes misclassified as 'Jog' mainly because both activities may involve a feet-rising action. A possible solution for improving the discrimination between similar activities is to include more sensors located at the knees or thighs, or to deploy other kinds of sensors; for instance, location sensors can be used to keep track of a subject's body movement. Another interesting point is that the confusion rates are not necessarily symmetric within a specific pair. For instance, though the 'Up' activity may be judged as 'Jog' at a rate of 2.4%, 'Jog' is never mistaken for 'Up'. Such a phenomenon may provide cues for further improvement in the selection of features.
Comparative evaluation
Extensive comparisons have been made to thoroughly examine the performance of the proposed approach. This section reports the comparison results.
First, we compare the recognition performance of our proposed feature reduction with two other feature reduction techniques, i.e., PCA and LDA. The evaluation is performed on the WARD dataset, and the same RVM classifier is employed. The comparison results are listed in Table 4. GDA achieves a recognition rate of 99.2%, which is 22.9% higher than PCA and 58.9% higher than LDA. This outcome is as per our expectation. On the one hand, the distribution of wearable data is nonlinear and complex due to factors such as measurement noise, outliers, and other variations. On the other hand, both PCA and LDA are linear feature reduction techniques; it is thus difficult for them to capture nonlinear relationships with a linear mapping. On the contrary, GDA is a nonlinear extension of LDA: it transforms the original data space into a feature space via a nonlinear kernel mapping, where the data are more likely to be linearly separable than in the original space. Therefore, GDA provides a more reliable and robust solution to the activity recognition problem.
We subsequently compare the recognition performance of the RVM classifiers with three other popular classification techniques, i.e., k-NN, Bayesian decision, and SVM. The evaluation is again performed on the WARD dataset, with the same feature reduction technique (GDA). In general, recognition accuracy is sensitive to the parameters of the classifiers; for a fair comparison, it is desirable to adopt optimal parameters for each kind of classifier. Specifically, for the k-NN classifier we set k = 3, since this achieved the highest recognition accuracy in our experiments. For the RVM classifier, the Gaussian kernel is employed and the optimal bandwidth is found by the following simple method: we increased the bandwidth with a constant step of 0.05 over the range [0.05, 1] and trained the RVM classifier over the whole training set; the bandwidth of 0.15, which maximized the classification accuracy, was chosen for the following experiments. For the multiclass SVM classifiers, we also adopted the Gaussian kernel and the 'one-versus-all' strategy, the same as in the multiclass RVM model; the optimal values of its two controlling parameters, i.e., the bandwidth and the regularizing parameter C, were fixed with the same searching strategy as for the RVM (in this article, the optimal bandwidth and regularizing parameters are set to 0.25 and 10, respectively). Table 5 shows the recognition accuracies of these different classification techniques. The RVM gives a recognition rate as high as 99.2%, followed by the SVM and Bayesian decision with 97.7 and 95.9%, respectively, while the k-NN performs the worst, reaching only 88.3%. The reason may be as follows: k-NN is a linear classifier that calculates the similarity between the test sample and the training samples; since it does not take the data distribution into account, it may be ill-suited to noisy data. The SVM is based on the principle of structural risk minimization; the final classifier depends only on the 'borderline' training samples, i.e., the support vectors (SVs), which are located near the decision boundary. This makes the SVM sensitive to noise or outliers, since wrongly classified patterns lie near the separating hyperplane. The RVM is a Bayesian extension of the SVM; its final classifier depends on even fewer training samples, i.e., the RVs. Unlike the SVs, these RVs are formed by samples that appear more representative of the classes and are located away from the decision boundary. Therefore, the RVM has better generalization ability and is more robust to noise or outliers. Additionally, Table 5 gives N_SV and N_RV, the numbers of support vectors in the SVM model and relevance vectors in the RVM model, respectively; they reflect the classifiers' structural complexity [21]. Although the values of N_SV are larger than 70 in most cases, the values of N_RV are never more than 4, i.e., the sparsity of the RVM is much better than that of the SVM. Therefore, the results in Table 5 demonstrate that the RVM has a remarkable advantage both in recognition accuracy and in the sparsity of the RVs. To further evaluate the performance of our approach, we also employed other conventional metrics [22,23], including precision, recall rate, F index, and specificity rate (their standard forms are given below), where TP (true positive) is the number of positive samples classified as positive, FP (false positive) the number of negative samples classified as positive, FN (false negative) the number of positive samples classified as negative, and TN (true negative) the number of negative samples classified as negative. The comparison results for all these metrics are plotted in Figure 6. It can be seen that the RVM achieves the highest scores: 0.9918, 0.9903, 0.9910, and 0.9993, respectively.
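The four metrics referenced above take their standard textbook forms (the formulas were elided in the extracted text):

```latex
\mathrm{precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{recall} = \frac{TP}{TP + FN}, \qquad
F = \frac{2\,\mathrm{precision}\cdot\mathrm{recall}}
         {\mathrm{precision} + \mathrm{recall}}, \qquad
\mathrm{specificity} = \frac{TN}{TN + FP}
```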
Figure 7 highlights the relationship between the true positive rate (TPR) and the false positive rate (FPR) for the different classification techniques. It shows that the result of the RVM is almost perfect, with the highest TPR of 99.18% and the lowest FPR of 0.07%. The SVM and Bayesian decision methods appear a little worse, while the k-NN method performs the worst, with the lowest TPR of 89.04% and the highest FPR of 1.08%.
Finally, we also compare the performance of our approach with other existing methods on the same dataset. The quantitative results are reported in Table 2. It can be seen that our approach outperforms the alternatives in terms of recognition rate. Specifically, Yang et al. [19] employed a distributed sparsity classifier to classify human activities and reported a recognition accuracy of 93.46% with all five sensor nodes, which is about 6% lower than that of our approach. They modeled the distribution of multiple action classes as a mixture subspace model and represented the test sample as a linear approximation of all training samples. However, this linear representation may meet limitations in describing test samples of complex activities. Huynh [24] combined a generative model (multiple eigenspaces) with an SVM classifier in their activity recognition framework. Since the multiple-eigenspace approach has advantages in representing the structure of the input data and the SVM has good discriminability, they achieved higher recognition accuracy than Yang's method, but still 2.23% lower than ours. We also note that Yang et al.'s method can adapt to changes in the sensor configuration by constructing a global projection matrix; it therefore has an advantage over Huynh's approach, as well as ours, in dealing with problems where one or more sensor nodes (or sensors) fail. One possible way for us to handle such problems is to train on the data from each sensor individually, and then fuse the classification results of the valid sensors at the decision level.
It is also worth noting that the test data we use differ from those used in the study [25]. In our test data, the number of sensor nodes, the number of test subjects, and the number of activity types are 5, 20, and 13, respectively, while the corresponding numbers in [25] are 8, 3, and 12. Therefore, our results are not compared with those in [25].
Conclusions
We put forward a novel approach for the recognition of human activities with wearable sensors, by combining the GDA and RVM techniques. To the best of the authors' knowledge, both of these techniques are applied to this domain for the first time.
As a powerful data dimension reduction method, GDA can sharply reduce the dimension of the feature space while maintaining the most discriminative information among different activities. Specifically, in this article the dimension of the feature vectors has been reduced from 350 to merely 12, which greatly speeds up the subsequent training process. Meanwhile, the RVM shows great flexibility in extending to multiclass classification problems. Experimental results on the WARD dataset demonstrated that the RVM technique not only provides the highest recognition rate, the highest TPR, and the lowest FPR compared with the conventional classification techniques, but also possesses a much simpler classifier structure in contrast to the SVM. In conclusion, our approach has advantages in real applications and in solving problems with large-scale datasets, owing to its near-perfect recognition performance, strong feature-reduction ability, and simple classifier structure.
Appendix: Derivations of Equations (14) and (15)
The RVM model takes the form of a linear combination of basis functions transformed by a logistic sigmoid function: y(x, w) = σ(w^T φ(x)) = 1/(1 + e^{−w^T φ(x)}) (20). The gradient vector of the log posterior distribution, which follows from Equation (13), is given by ∇_w ln p(w|t, α) = Φ^T (t − y) − A w (21), where the vector y = (y_1, y_2, ..., y_N)^T and Φ is the design matrix with elements Φ_ni = φ_i(x_n). Setting Equation (21) to zero, the mean of the Laplace approximation is obtained as w_MP = A^−1 Φ^T (t − y).
The Hessian matrix of the log posterior distribution, again following from Equation (13), is ∇_w ∇_w ln p(w|t, α) = −(Φ^T B Φ + A), where B is an N × N diagonal matrix with elements b_n = y_n(1 − y_n). At convergence of the iteratively reweighted least squares algorithm, the negative Hessian represents the inverse covariance matrix of the Gaussian approximation to the posterior distribution [26]. The covariance of Equation (15) then follows as Σ_MP = (Φ^T B Φ + A)^−1.
Figure 1 A set of activity signals recorded by the accelerometer and the gyroscope located at the waist.
Figure 2
Figure 2 An instance of activity signal representations both in the time domain and in the transform domain: (a, b) the original data in a window related to the walking-forward and the jump activity, respectively, recorded by the accelerometer located at the waist along the z-axis; (c, d) resultant FFT of the signals in (a, b), respectively (with the five largest FFT peaks marked with 'O'); (e, f) resultant cepstrum of the signals in (a, b), respectively (with the five largest cepstral peaks marked with 'O').
Figure 3
Figure 3 Scatter plots of the three most important features respectively picked out by PCA (up), LDA (middle) and GDA (bottom).
Figure 4
Figure 4 The marginal likelihood versus the number of iterations.
Figure 5
Figure 5 Recognition accuracy of our approach versus the number of feature types.
Figure 6
Figure 6 Evaluation of recognition performance with conventional metrics.
Table 1
Description of activities performed by each subject
Table 2
Recognition results with different feature reduction techniques
Table 3
Confusion matrix related to the ten times threefold validation by our approach
Table 4
Recognition results with different feature reduction techniques
Table 5
Recognition results with different classification techniques | 7,687.4 | 2012-05-10T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Interactions between cyclodextrins and cellular components: Towards greener medical applications?
In the field of host-guest chemistry, some of the most widely used hosts are probably cyclodextrins (CDs). As CDs are able to increase the water solubility of numerous drugs by inclusion into their hydrophobic cavity, they have been widely used to develop numerous pharmaceutical formulations. Nevertheless, CDs are also able to interact with endogenous substances that originate from an organism, tissue, or cell. These interactions can be useful for a vast array of topics including cholesterol manipulation, treatment of Alzheimer's disease, control of pathogens, etc. In addition, the use of natural CDs offers the great advantage of avoiding or reducing the use of common petroleum-sourced drugs. In this paper, the general features and applications of CDs are reviewed, as well as their interactions with isolated biomolecules leading to the formation of inclusion or exclusion complexes. Finally, some potential medical applications are highlighted through several examples.
Introduction
Cyclodextrins (CDs) were discovered and identified over a century ago [1][2][3]. Between 1911 and 1935, Pringsheim and co-workers demonstrated the ability of CDs to form complexes with many organic molecules [4,5]. Since the 1970s, the structural elucidation of the three natural CDs, α-, β-, and γ-CD, composed of 6-, 7-, and 8-membered α-D-glucopyranoses linked by α-1,4 glycosidic bonds, has allowed the development and rational study of their encapsulation properties [6,7]. As their water solubilities differ significantly, a great variety of modified CDs has been developed to improve the stability and solubility of inclusion complexes [8][9][10]. Nowadays, CDs are widely applied in many fields [11-28] owing to their host-guest properties, their origin (produced from starch by enzymatic conversion), their relatively low prices, their easy modification, their biodegradability, and their low toxicity. Moreover, CDs are able to interact with a wide range of biomolecules, opening the way for many biological applications. The majority of this research is based on the ability of CDs to extract lipids from the cell membrane. The objective of this contribution is to focus on the potential use of natural and chemically modified CDs in the vast array of medical and biological applications.
Scheme 1: Structure and conventional representation of native CDs.
Review
Cyclodextrins: synthesis, structure and physicochemical properties.
i) Native cyclodextrins
As mentioned earlier, ordinary starch hydrolysis (e.g., of corn starch) by an enzyme (i.e., cyclodextrin glycosyltransferase, CGTase) allows the production of the native CDs [13]. To reduce separation and purification costs, selective α-, β-, and γ-CGTases have been developed in the last two decades [29]. Nevertheless, β-CD remains the cheapest, whereas γ-CD is the most expensive. The molecular shape of the native CDs can be represented as a truncated cone with a "hydrophobic" cavity which can accommodate hydrophobic compounds (Scheme 1). In aqueous solution, the complexation is enthalpically and entropically driven. In addition, complementary interactions (e.g., van der Waals forces, H-bonds, etc.) arise between the CD and the guest. A non-polar, suitably sized guest may be bound in various molar ratios (e.g., 1:1, 2:1, 1:2, etc.). In all cases, knowledge of the binding constants (K_ass) is crucial, because these values provide an index of host-guest binding forces. CDs can also form exclusion complexes in which the CDs are bound to the guest through an H-bond network. For instance, the complexation of the [PMo12O40] anion by β- and γ-CD results in a one-dimensional columnar structure through a combination of intermolecular [C−H···O=Mo] and [O−H···O] interactions [30]. Unfortunately, the natural CDs as well as their inclusion complexes are of limited aqueous solubility, leading to their precipitation. Fortunately, native CDs are effective templates for the generation of a wide range of molecular hosts through chemical modification.
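For the simplest 1:1 stoichiometry, the association constant mentioned above takes the standard textbook form (supplied here for reference; square brackets denote equilibrium concentrations):

```latex
\mathrm{CD} + \mathrm{G} \;\rightleftharpoons\; \mathrm{CD{\cdot}G},
\qquad
K_{\mathrm{ass}} = \frac{[\mathrm{CD{\cdot}G}]}{[\mathrm{CD}]\,[\mathrm{G}]}
```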
ii) Modified cyclodextrins
In order to meet specific requirements in the host-guest complex, chemical modifications make it possible to tailor CDs to a particular guest. The hydroxy groups serve as scaffolds on which substituents can easily be introduced. From a chemical synthesis point of view, the reactivity difference between the primary and secondary hydroxy groups allows selective functionalization on the narrow or the wider edge of the truncated cone (Table 1). Access to the gamut of functional groups greatly expands the utility of native and modified CDs in their numerous applications.
iii) Applications of cyclodextrins
As natural CDs and their derivatives are able to encapsulate a wide range of guest molecules in their cavity, they can be used in a wide range of applications including analytical chemistry [21,22,31], agriculture [15], food technology [16], catalysis [23][24][25][32], cosmetics [26], textile processing [28,33], and environmental protection technologies [27,34]. Nevertheless, the largest global consumer of CDs is clearly the pharmaceutical industry [35,36]. Indeed, CDs are very useful for forming inclusion complexes with a wide range of drugs and have become a very valuable tool for the formulator to overcome delivery limitations [37,38]. As a result, numerous formulations that use CDs are now on the market worldwide (Table 2).
iv) Toxicity and biological effects of native and modified cyclodextrins
As safety and toxicity are important criteria for consideration before using CDs in pharmaceutical products, this section deals with toxicological issues. The native α- and β-CD, unlike γ-CD, cannot be hydrolyzed by pancreatic amylases or human salivary amylase, but can be fermented by the intestinal microflora. When administered orally, native CDs and hydrophilic derivatives are not absorbed from the human gastrointestinal tract, which makes them practically nontoxic, owing to their high molecular mass (ranging from almost 1 000 to over 2 000 g/mol) and their hydrophilic nature with a significant number of H-bond donors and acceptors [39]. Indeed, CDs violate the first three of the four criteria of Lipinski's rule: i) no more than 5 H-bond donors, ii) no more than 10 H-bond acceptors, iii) a molecular mass of less than 500 g/mol, and iv) an octanol-water partition coefficient (log P) not greater than 5 [40]. As these criteria apply only to absorption of compounds by passive diffusion through cell membranes, the native CDs and their hydrophilic derivatives cannot be absorbed in their intact form, and any cellular absorption, if it occurs, proceeds by passive carrier-mediated transport through cytoplasmic membranes (i.e., via transporter proteins) [41]. In contrast, although lipophilic derivatives (e.g., ME-β-CD) interact more readily with membranes than the hydrophilic derivatives, they still cannot readily permeate cell membranes (see below) [42]. Moreover, oral administration of alkylated CD derivatives, such as ME-β-CD, is limited by their potential toxicity [43]: ME-β-CD is partially absorbed from the gastrointestinal tract into the systemic circulation, and alkylated derivatives have been shown to be toxic after parenteral administration. The opposite holds for hydrophilic CD derivatives, such as HP-β-CD and SBE-β-CD, which are considered safe for parenteral administration. In a general way, γ-CD, HP-β-CD, SBE-β-CD, S-β-CD, and G2-β-CD appear to be globally safer than α-, β-, and alkylated CDs, which are less suitable for parenteral administration [44,45]. Table 3 presents a pharmacokinetics and safety overview of some natural and modified CDs (footnotes to Table 3: a, taken from [44,47-53]; b, randomly methylated β-CD). When administered, natural and hydrophilic CD derivatives disappear rapidly from the systemic circulation and are distributed to various tissues of the body such as the kidney, liver, urinary bladder, etc. Nevertheless, they are mainly excreted renally in intact form. At high concentrations, α-, β-,
and alkylated CDs cause renal damage and dysfunction [46]. In 2008, Stella and He discussed detailed studies of the toxicology, mutagenicity, teratogenicity, and carcinogenicity of various CDs [45]. Overt signs of acute toxicity are not apparent for CDs (i.e., no inflammatory response and no cell degeneration). They are also not genotoxic, teratogenic, or mutagenic; CDs affect the human organism only at extremely high concentrations. Nevertheless, the principal side effect of natural and modified CDs is probably cell toxicity. This effect is directly correlated with their hemolytic activities. Indeed, several in vitro studies have reported erythrocyte lysis, although the toxicological implication in vivo is negligible. The lysis mechanism is related to their capacity to draw phospholipids and cholesterol out of the biological membrane (see below). On this basis, the complexation of endogenous substances is of potential interest for many applications.
Biomolecule/cyclodextrin inclusion complexes
Native and modified CDs can be used to complex certain chemicals naturally present in cells and tissues (i.e., endogenous substances). Indeed, CDs are able to form complexes with various biomolecules including lipids, carbohydrates, proteins, and nucleic acids. In this section, some biomolecule/CD inclusion complexes are presented.
i) Complexation of lipids and consequences
Lipids are a very diverse group of hydrophobic or amphiphilic molecules, including, among others, fats, waxes, sterols, fat-soluble vitamins, phospholipids, and mono-, di-, and triglycerides. Their amphiphilic nature causes the molecules of certain lipids to organize into liposomes in aqueous medium. This property allows the formation of biological membranes; indeed, cell and organelle membranes are composed of lipids. Lipids also serve various other biological functions, including cell signaling and the storage of metabolic energy by lipogenesis. Biological lipids are essentially derived from two types of compounds acting as "building blocks": ketoacyl groups and isoprene units. From this point of view, they can be divided into eight categories: fatty acids (and their derivatives: mono-, di-, and triglycerides and phospholipids), acylglycerols, phosphoglycerides, sphingolipids, glycolipids, and polyketides, which result from the condensation of ketoacyl groups, as well as sterols (e.g., cholesterol) and prenols, which are produced from the condensation of isoprene units [54]. These compounds can easily be included inside the CDs because they are hydrophobic or amphiphilic molecules. As mentioned earlier, and as will become exceedingly clear throughout the following sections, the majority of research involving CDs has revolved around their ability to manipulate the lipid (phospholipid and cholesterol) composition of different cells [55][56][57][58]. Although numerous studies deal with this topic, the mechanism of this process is poorly investigated (i.e., only the consequences of the phenomenon are reported). For the sake of clarity, only some typical examples are reported in this section.
The first well-documented effect of CDs is probably hemolysis, which corresponds to the lysis of red blood cells (erythrocytes) and the release of their contents into the surrounding fluid (blood plasma). In 1982, Irie and co-workers reported that native CDs are able to cause hemolysis of human erythrocytes [59]. This behavior occurs at relatively high concentrations (>1 mM), and the degree of cholesterol extraction is a function of the CD used, its concentration, the incubation time, and the temperature. For instance, under given conditions (isotonic solution with similar incubation time and temperature), the observed hemolysis is in the order γ-CD < α-CD < β-CD.
This differential effect, observed for native CDs, was explained by Ohtani et al. in 1989 [58]. As the membrane of erythrocytes is composed of proteins (43%) associated with lipids (49%) and carbohydrates (8%), and as the fraction of cholesterol is 25% of the total membrane lipids [54], the proposed explanation is based on the specific interaction of natural CDs with the erythrocyte membrane components. Indeed, α- and β-CD are excellently suited to solubilize phospholipids and cholesterol, respectively, whereas γ-CD is generally less lipid-selective. In more detail, the CD affinities for solubilizing the various lipid components of the erythrocyte membranes are in the order γ-CD << β-CD < α-CD for phospholipids and α-CD < γ-CD << β-CD for cholesterol [58]. These findings are corroborated by the work of Leventis and Silvius, who reported that β- and γ-CD accelerate the rate of cholesterol transfer by a larger factor than they accelerate the transfer of phospholipid, whereas the opposite is true for α-CD [60]. The hemolytic properties of CDs are a general behavior not limited to human erythrocytes: all mammalian red blood cells are affected by the parent CDs. For instance, dog erythrocytes are also affected by native CDs in the order γ-CD < α-CD < β-CD [61]; the magnitude of the hemolytic activity observed for dog erythrocytes is thus consistent with that observed for human erythrocytes (see the discussion above). However, the hemolytic activity is largely influenced by the substituents attached to the CDs. The presence of hydrophilic substituents (e.g., glucosyl, 2-hydroxypropyl, 3-hydroxypropyl, maltosyl, sulfate, sulfobutyl ether, etc.) reduces the hemolytic activity in comparison with the parent CDs, while lipophilic ones (e.g., methylated CDs) show the strongest hemolytic activities [62]. As for the parent CDs, these differences are ascribed to the different solubilization effects on lipid components and their sequestration in the external aqueous phase.
As hemolysis is attributed to the removal of erythrocyte membrane components, particularly phospholipids and cholesterol, the values of the binding constants between CDs and lipids are very relevant. To the authors' knowledge, there is only one paper in the literature that describes the binding constants between CDs (α- or γ-CD) and short phospholipids (i.e., diheptanoylphosphatidylcholine, DHPC) [63]. In this study, the association constants were estimated from 1H NMR measurements. The results proved that the K1 values are in the order α-CD < γ-CD, while the K2 values are in the order γ-CD < α-CD. This behavior was attributed to the large cavity of γ-CD, which is able to incorporate both alkyl chains of DHPC simultaneously; in contrast, the formation of a 1:2 inclusion complex is easier with α-CD than with γ-CD. These findings are corroborated by Fauvelle and co-workers, who reported that α-CD has the strongest affinity for phospholipids (e.g., phosphatidylinositol) [64,65]. In 2000, Nishijo et al. studied the interactions of various CDs with dipalmitoyl-, distearoyl-, and dimyristoylphosphatidylcholine liposomes. This study highlighted that the liposome-CD interaction depends on the length of the fatty acid chain of the phospholipid, the cavity size, and the nature of the substituents on the CD [66]. In the literature, the binding constant between cholesterol and β-CD was estimated at around 1.7 × 10^4 M−1 by a solubility method [67,68]. This value proves the good stability of the inclusion complex, the driving force of complexation being the hydrophobic interaction. Although there is no information on the binding constant for the inclusion of cholesterol in γ-CD, the internal cavity diameter of β-CD and its derivatives perfectly matches the size of the sterol molecule, contrary to γ-CD (which is too large) [69,70]. Moreover, the positive correlation observed between the hemolytic activities of various modified CDs and their ability to solubilize cholesterol reveals that HP-β-CD is a more efficient cholesterol acceptor than HP-γ-CD, apparently because the diameter of its internal cavity matches the size of this molecule. Finally, it is noteworthy that all CDs lose their ability to induce hemolysis when their cavities are occupied by guest molecules, owing to a reduced interaction with the erythrocyte membranes [71]. All these observations support the aforementioned affinities of α-CD for phospholipids and of β-CD for cholesterol.
In addition, sub-hemolytic concentrations of native CDs have also been demonstrated to cause shape changes in human erythrocytes. The hemolytic effect is concomitant with shape changes (from biconcave discocyte to stomatocyte or echinocyte) depending on the cavity size of the CDs. For instance, α- and γ-CD induce progressive shape changes from discocytes into stomatocytes and from stomatocytes into spherocytes [58]. In contrast, β-CD leads only to swelling of the erythrocytes. Similar effects are found for chemically modified CDs [74]. Finally, it is noteworthy that the presence of DM-α-CD and DM-β-CD also leads to hemolysis. As cholesterol interacts with markedly higher affinity with sphingolipids (e.g., sphingomyelin) than with
common membrane phospholipids, the extraction of cholesterol by DM-β-CD or of sphingomyelin by DM-α-CD leads to strong modification of the cholesterol-rich or sphingomyelin-rich lipid rafts, respectively [60]. Therefore, even if the target is significantly different, the final effect is the same (i.e., hemolysis).
Scheme 2: Proposed mechanism for morphological changes in erythrocytes induced by methylated CDs.
It should be noted that this extraction of cholesterol and/or phospholipids is not limited to erythrocytes: all eukaryotic and prokaryotic cells are affected by the presence of CDs. For instance, the cytotoxicity of native, methylated, and hydroxypropylated α-, β-, and γ-CDs has been studied in an in vitro model of the blood-brain barrier by Monnaert and co-workers [75].
The results prove that the native CDs are the most toxic (γ-CD < β-CD < α-CD). As expected, lipid effluxes from brain capillary endothelial cells in the presence of native CDs reveal that α-CD extracts only phospholipids, whereas β-CD is able to remove both phospholipids and cholesterol; in contrast, γ-CD is less lipid-selective than the other native CDs. This differential effect, compared with the order of magnitude of the hemolytic activity (γ-CD < α-CD < β-CD), could be ascribed to the lower cholesterol content of blood-brain barrier cells compared to erythrocytes; indeed, the cholesterol fraction is markedly higher in erythrocytes than in other cells. As for hemolysis, the presence of hydrophilic substituents (e.g., 2-hydroxypropyl and sulfobutyl ether) abolishes the cytotoxicity, while the presence of methyl residues induces the death of various cells (Caco-2, TR146, PC-12, etc.) [76][77][78]. For instance, cell death induced by DM-β-CD is caused by a marked apoptosis mechanism (i.e., a process by which cells trigger their self-destruction in response to a signal, involving cell changes prior to death) in NR8383, A549, and Jurkat cells [79]. This apoptosis results from cholesterol extraction leading to inhibition of the activation of the PI3K-Akt-Bad pathway. The presence of DM-α-CD had totally opposite repercussions: here, cell death results from a non-apoptotic mechanism (i.e., necrosis). This differential effect could be attributed to a dissimilarity in the interaction of the methylated CDs with the cholesterol-rich lipid rafts (for DM-β-CD) and with the sphingomyelin-rich domains (for DM-α-CD). These results suggest that the lipid rafts of cell membranes are involved in cell death and cellular function.
However, the mechanism of cholesterol extraction mediated by CDs is still open to discussion. For instance, Yancey et al. proposed that CDs diffuse into the vicinity of the erythrocyte membrane, leading to lipid complexation without complete desorption into the aqueous phase [80]. In contrast, Besenicar et al. proposed that CDs complex lipids during their natural exchange from the membrane to the aqueous phase [81]. Stella and He supposed that the CDs interact directly with the membrane of the cells prior to lipid efflux [45]. Along the same lines, Mascetti et al. proposed that CDs interact directly with cholesterol [82]. Based on molecular simulations, López et al. proposed the following mechanism: i) association of CDs in aqueous solution to form dimers, ii) binding of the dimers at the membrane surface, iii) extraction and complexation of cholesterol, and iv) desorption of the CD/cholesterol complex into the aqueous solution [83]. However, whatever the molecular mechanism, the lipid efflux mediated by CDs is clearly different from that of surfactants. Indeed, at low concentrations, the surfactant mechanism involves the penetration of detergent molecules into the lipid membrane, increasing its fluidity, whereas at higher concentrations the extraction of membrane constituents proceeds by micellar solubilization [84,85].
ii) Complexation of peptides and proteins and some applications
Proteins are polymers of amino acids covalently linked through peptide bonds. The nature of a protein is determined primarily by its amino acid sequence, which constitutes its primary structure. Amino acids have very different chemical properties, and their arrangement along the polypeptide chain determines their spatial arrangement, described locally by the secondary and tertiary structures. The secondary structure describes the arrangement of amino acid residues observed at the atomic scale, stabilized mainly by H-bonds (e.g., α-helix, β-sheet and turns). The tertiary structure corresponds to the overall three-dimensional shape of the protein and describes the interactions between the different elements of the secondary structure. Finally, the assembly of several protein subunits into a functional complex is described by the quaternary structure [54]. As some amino acids have hydrophobic side chains (e.g., alanine, valine, leucine, isoleucine, proline, phenylalanine, tryptophan, cysteine and methionine), they can be easily included inside the CDs, and this complexation leads to modification of the protein. For the sake of clarity, only some typical examples are reported in this section.
In their paper on the differential effects of native CDs, Ohtani and co-workers highlighted that, in addition to extracting lipids, these CDs are also able to solubilize proteins in the order α-CD < γ-CD << β-CD [58]. In 1991, Sharma and Janis studied the interaction of CDs with hydrophobic proteins, leading to the formation of soluble and insoluble complexes [86]. CDs caused the precipitation of lipoproteins in the order γ-CD < α-CD < β-CD. This behavior could be ascribed to the formation of inclusion complexes; however, the presented data did not exclude the formation of exclusion complexes.
Several years later, Horský and Pitha reported a study on the interaction of CDs with peptides containing aromatic amino acids [87]. From competitive spectrophotometry measurements, with p-nitrophenol as a competing reagent at pH 7.4, the authors determined the stability constants of aromatic amino acids and their oligopeptides with α-, β-, HP-β-, and ME-β-CD. The estimated constants for free L-phenylalanine (Phe) increased in the order ME-β-CD ≈ HP-β-CD < α-CD < β-CD. Moreover, the results showed that the stability constants of oligopeptides containing Phe are higher than that of Phe itself. For instance, the binding constant of free Phe with β-CD was estimated at 17 M⁻¹, whereas that of the Gly-Gly-Phe tripeptide was 89 M⁻¹. Nevertheless, such complexation occurs only when the native, functional form of the protein is unfolded.
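The competitive spectrophotometric approach used in that study can be rationalized with a standard competition relation. The expressions below are a generic treatment (not reproduced from [87]), assuming 1:1 stoichiometry and an excess of the competing analyte A over the indicator D (here, p-nitrophenol):

$$K_{\mathrm{app}} = \frac{K_{\mathrm{D}}}{1 + K_{\mathrm{A}}[\mathrm{A}]} \qquad\Longrightarrow\qquad K_{\mathrm{A}} = \frac{1}{[\mathrm{A}]}\left(\frac{K_{\mathrm{D}}}{K_{\mathrm{app}}} - 1\right)$$

where K_D is the known indicator/CD constant and K_app is the apparent indicator constant measured in the presence of the analyte.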
In 1996, Bekos et al. investigated the role of the L-tyrosine (Tyr) residue in the binding of pentapeptides to α- and β-CD [88]. The two peptides used in this study were Tyr-Ile-Gly-Ser-Arg (YIGSR) and Tyr-Gly-Gly-Phe-Leu (YGGFL); the former interacts specifically with integrin receptors on specific neuronal cells, whereas the latter is known to bind to brain receptor sites. From steady-state fluorescence spectroscopy, the estimated constants for free Tyr increased in the order α-CD < β-CD. As in the previous study, the stability constants of Tyr-containing pentapeptides with β-CD were higher than that of Tyr itself (48, 224, and 123 M⁻¹ for free Tyr, YIGSR, and YGGFL, respectively). Therefore, the pentapeptide conformation affects the stability of the pentapeptide/β-CD inclusion complex. In contrast, the pentapeptide/α-CD inclusion complex was not affected by the oligopeptide conformation (27, 20, and 20 M⁻¹ for free Tyr, YIGSR, and YGGFL, respectively).
The same year, Lovatt and co-workers investigated the dissociation of bovine insulin oligomers induced by aqueous solutions of α-, HP-β-, and ME-β-CD [89]. The energetics of the dissociation of insulin oligomers were investigated by microcalorimetry. As expected, the dissociation of insulin oligomers is increased upon the addition of CDs. This dissociation is clearly related to the interaction of these CDs with the protein side chains. Indeed, the dissociation of insulin oligomers is endothermic without CDs, whereas in the presence of α-CD the dissociation is less endothermic owing to the exothermic binding of α-CD to groups exposed on insulin monomers after dissociation. Therefore, α-CD facilitates oligomer dissociation. In contrast, the dissociation is more endothermic in the presence of HP-β- and ME-β-CD, although oligomer dissociation is still induced. The authors suggest that the binding of HP-β- and ME-β-CD is endothermic and entropy-driven.
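The thermodynamic reading of these microcalorimetry results follows from a textbook identity, not specific to [89]:

$$\Delta G = \Delta H - T\Delta S$$

A binding that is "endothermic and entropy-driven" has ΔH > 0 yet ΔG < 0 because TΔS > ΔH; in CD chemistry this is commonly attributed to the release of ordered water molecules from the cavity and the guest surface upon complexation.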
As native and modified CDs are able to complex some of the amino acids that constitute peptides and proteins, these molecules can be useful for their separation by capillary zone electrophoresis. In this context, Rathore and Horváth reported that carboxymethyl-β-CD (CM-β-CD) in the electrophoretic medium (aqueous buffer solution, pH 2.5) enhanced the capillary zone electrophoresis separation (raw fused-silica capillary) of standard proteins such as α-chymotrypsinogen A, cytochrome c, lysozyme and ribonuclease A [90]. The results showed that the separation of peptides and proteins can be enhanced by adding CDs to the electrophoretic medium. Unfortunately, only certain CDs can be used as selectivity enhancers; indeed, in contrast to CM-β-CD, the addition of DM-β-CD had no effect on the separation of the mentioned proteins and peptides.
In 2006, the group of Yamamoto studied the effect of β-, γ-, G1-β-, and Me-α-CD on the thermal stability of chicken egg white lysozyme in aqueous buffered solution by circular dichroism and fluorescence spectroscopy [91]. The thermal stability is significantly lowered in the presence of β-, γ-, and G1-β-CD, whereas the opposite is true for Me-α-CD. It should be noted that the reduction in thermal stability is particularly pronounced for G1-β-CD. Based on fluorescence spectroscopy, the authors suggested that the CDs include the side chains of tryptophan (Trp) residues of lysozyme within their internal cavities, thereby diminishing the hydrophobicity of the hydrophobic core of lysozyme and consequently lowering its thermal stability (Scheme 3). In addition, some CD molecules remain bound to the side chains of Trp residues, retarding the renaturation of lysozyme.
Scheme 3: Proposed mechanism for the conformational change of egg white lysozyme with increasing temperature in the presence of modified CDs.
From the findings described above, it can be presumed that the effect of a CD is directly linked to its ability to complex Trp, and that the behavior of Me-α-CD can be related to its cavity size: the binding constants are always weaker with modified α-CD than with functionalized β-CD (see discussion above). In 2009, a ¹H NMR spectroscopic study revealed that the ¹H NMR signals corresponding to Trp residues were shifted upon the addition of G1-β-CD, owing to encapsulation of the tryptophan residues in the G1-β-CD cavity [92]. In addition, the ¹H NMR signals for cysteine 64 and isoleucine 98 were also influenced to a considerable extent by the addition of G1-β-CD, allowing the conclusion that these hydrophobic amino acid residues are also included by this CD. These results are highly compatible with the pronounced reduction in thermal stability observed in the presence of G1-β-CD. Overall, the interaction of CDs with proteins is complex owing to the presence of many binding sites.
Thaumatins are a family of proteins responsible for the sweetness of the katemfe fruit (Thaumatococcus daniellii Bennett), endemic to West Africa. Thaumatin is used worldwide in human nutrition and pharmacology as a sweetener or flavor enhancer and to mask bitterness, and it is about 100,000 times sweeter than sucrose. Thaumatin has been shown to bind to G-protein-coupled receptors (GPCRs), transmembrane proteins responsible for signal transduction. Therefore, the interaction of CDs with thaumatin could be used to modify the interaction of thaumatin with GPCRs and thereby its sweet-taste profile. In this context, Thordarson et al. studied the interaction of α-CD with thaumatin [93]. The 1D and 2D NMR experiments revealed that α-CD binds to aromatic residues of thaumatin with a binding constant of 8.5 M⁻¹. As the active binding site of the thaumatin protein is known, the authors synthesized a heptapeptide (Lys-Thr-Gly-Asp-Arg-Gly-Phe) that mimics this binding site. The results show that α-CD binds to the C-terminal, solvent-accessible phenylalanine residue with a binding constant of 8.8 M⁻¹. As α-CD may interact with the active binding site of thaumatin, modulation of the thaumatin-GPCR interaction is probably possible.
Varca et al. reviewed the possible pharmaceutical applications of CD complexation for protein-like structures, such as enzymes, peptides and amino acids [94]. The authors highlight that the formation of cyclodextrin/protein supramolecular complexes can be used to improve their stabilization, although the intrinsic characteristics of the guest proteins can also be modified. Overall, it is clear from this section that peptides and proteins have only moderate binding constants with CDs compared to lipids.
iii) Complexation of carbohydrates
The International Union of Pure and Applied Chemistry (IUPAC) defines carbohydrates as a class of organic compounds containing a carbonyl group (aldehyde or ketone) and at least two hydroxy groups (OH). Substances derived from monosaccharides by reduction of the carbonyl group, by oxidation of at least one terminal functional group to a carboxylic acid, or by replacement of one or more hydroxy groups by a hydrogen atom, an amine, a thiol or any similar group are also called carbohydrates. Together with proteins and lipids, carbohydrates are essential constituents of living organisms because they are key biological intermediates for energy storage. In autotrophs, such as plants, sugars are converted into starch, whereas in heterotrophic organisms, such as animals, they are stored as glycogen. Polysaccharides also serve as structural components: cellulose for plants and chitin for arthropods. Moreover, saccharides and their derivatives play key roles in the immune system, fertilization, blood clotting, information transfer, etc. For instance, the 5-carbon monosaccharide ribose forms the backbone of the genetic molecule RNA (see below) and is also an important component of coenzymes (ATP, FAD and NAD).
In 1992, Aoyama et al. reported on the selective complexation of pentoses and hexoses by β-CD [95]. Binding constants were estimated from the competitive inhibition of 8-anilinonaphthalene-1-sulfonate binding, followed by fluorescence measurements. The results revealed that D-ribose, D- and L-arabinose, D-xylose, D-lyxose, D-2-deoxyribose, and methyl β-D-ribopyranoside were complexed by β-CD (binding constants ≤ 14 M⁻¹). In contrast, aldohexoses and their derivatives (D-glucose, D-galactose, D-mannose, D- and L-fucose, and methyl α-D-fucopyranoside) were not complexed (binding constants ≈ 0 M⁻¹). These binding constants can be directly correlated with the hydrophobicity of the sugar. Nevertheless, the H-bonds between the hydroxy groups of the bound sugar and the OH groups of β-CD are also extremely important for determining the structure and the selectivity of the complex.
It is noteworthy that several other publications have studied the interaction of D-glucose with native CDs. For instance, Hirsh and co-workers estimated the binding constants of D-glucose to α-CD and β-CD at 450 and 420 M⁻¹, respectively, using a blood glucose meter [96]. In contrast, Hacket et al. determined the binding constant of D-glucose to β-CD at 0.6 M⁻¹ by fluorimetric competition titrations [97], a value much closer to those published by Aoyama and co-workers. In addition, it is quite logical that D-glucose interacts weakly with native CDs, given its size and hydrophilic character. Faced with these conflicting results, Turner proposed to use kinetic measurements to determine the association constants of several sugars with β-CD [98]. This method was published in 2000 [99], and the binding constants obtained by these three groups are reported in Table 4 (values taken from [95], [97], and [98], respectively).
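Weak constants of this magnitude are typically extracted by fitting a 1:1 binding isotherm to a titration signal (fluorescence, NMR shift, or kinetic response). The sketch below is purely illustrative, with made-up data, and is not the procedure of any of the cited works:

```python
import numpy as np
from scipy.optimize import curve_fit

def isotherm(cd_total, K, s_max):
    """Observed signal for a 1:1 guest/CD complex, assuming the CD is in
    large excess so that [CD]_free ≈ [CD]_total (valid for weak binding)."""
    return s_max * K * cd_total / (1.0 + K * cd_total)

# Hypothetical titration: total CD concentration (M) vs. normalized signal
cd = np.array([0.0, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1])
signal = np.array([0.0, 0.007, 0.017, 0.034, 0.065, 0.148, 0.26])

# Fit K (M^-1) and the saturation signal; data above correspond to K ≈ 3.5 M^-1
(K_fit, s_max_fit), _ = curve_fit(isotherm, cd, signal, p0=[1.0, 1.0])
print(f"K = {K_fit:.1f} M^-1, s_max = {s_max_fit:.2f}")
```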
These values are relatively close to each other, and the sugar/β-CD binding constants increase in the order D-galactose ≈ D-glucose < D-mannose < D-arabinose < D-xylose < D-ribose. This ordering is consistent with the sugar hydrophobicity scale determined by Janado and Yano in 1985 (Scheme 4) [100], which was corroborated by Wei and Pohorille for the hexose series [101]. Therefore, even if the literature values for the binding constants obtained by the different methods are not especially self-consistent, it is clear that β-CD can selectively recognize pentoses in contrast to hexoses [102]. However, the binding constants remain very small (see Table 4). Based on all these results, the interaction of CDs with carbohydrates in aqueous solution can essentially be neglected. Similar conclusions were drawn by Paal and Szejtli [103].
iv) Complexation of nucleic acids
Nucleic acids are macromolecules whose monomer is the nucleotide. Each nucleotide has three components: a 5-carbon sugar, a phosphate group, and a nitrogenous base, and the nucleotides are joined by phosphodiester bonds. There are two types of nucleic acids according to the sugar: deoxyribonucleic acid (DNA, containing deoxyribose) and ribonucleic acid (RNA, containing ribose). Nucleic acids function in encoding, transmitting and expressing genetic information. As nucleic acids direct the synthesis of proteins, their modification has numerous consequences. As mentioned earlier, CDs are used for numerous commercial applications. Therefore, the investigation of the interactions of nucleic acids (e.g., DNA or RNA) with various types of CDs is important to evaluate the possible intracellular effects of CDs.
The interactions between native CDs and nucleic acids have been a subject of intense discussion over the past years. For instance, the results found in the literature for α-CD are contradictory. The works of Komiyama [104], Tee [105], and Spies et al. [106] suggested that α-CD cannot interact with DNA because its cavity is too small to accommodate DNA base pairs; all these results support the work of Hoffmann and Bock, who examined the complex formation between different CDs and nucleotides [107]. In contrast, in more recent work, Jaffer et al. found that α-CD can form H-bonds with DNA base pairs that flip out spontaneously at room temperature, leading to DNA denaturation [108]. Consequently, exclusion and inclusion complexes are obtained with α- and β-CD, respectively. Nevertheless, it is noteworthy that when a complex is formed with β-CD, the ribose and phosphate groups of the nucleotides also exert a stabilizing effect by establishing H-bonds with the outer rim of the CD molecules. Interestingly, the extent of complexation depends significantly on the base composition and on the double- or triple-helical structure. In contrast to native CDs, cationic CDs are known to interact strongly with DNA [109,110]. As a consequence, CDs can be used to complex DNA and to encapsulate it into liposomes for potential gene therapy applications [111]; other formulations can also be used to obtain nonviral vectors [112].
Since anthrylamines have potent DNA-intercalating properties, Ikeda et al. attached an anthrylamine to a β-CD [113]. The resulting anthryl(alkylamino)-β-CD was used as a chemically switched DNA intercalator. However, as the anthryl residue is locked in the CD cavity, its intercalation into DNA is not possible in aqueous solution. Upon addition of a ligand that binds tightly in the CD cavity (e.g., 1-adamantanol), the host molecule releases the anthryl unit, which then intercalates strongly into double-stranded DNA, leading to structural distortions (Scheme 5). This behavior was clearly established by ¹H NMR spectroscopy (shifts and broadening of the anthryl signals) in the presence of the 1-adamantanol guest. This concept could be very useful in nucleic acid reactions of medicinal and biotechnological importance for new drug delivery systems. Nevertheless, the binding constants between CDs and nucleic acids remain relatively modest and close to those observed for peptides and proteins (see above).
Current and potential medical and biological applications
As mentioned earlier, CDs are able to complex biomolecules, but the strength of this behavior depends on the molecular structure of the guest: the binding constants increase in the order carbohydrates << nucleic acids << proteins < lipids. Consequently, the majority of biological investigations of CDs involve their ability to extract lipids (cholesterol or phospholipids) from the plasma membrane. As expected, this capacity can be very useful for numerous applications. For the sake of clarity, only some typical applications of CD/cellular interactions are reported.
i) Cell membrane cholesterol efflux
As previously mentioned, CDs are able to interact with and complex cholesterol and other lipids [114]. A great number of publications deal with this topic and with the consequences of this phenomenon (e.g., hemolysis or cytotoxicity, see the section above). Since the nineties, β-CDs have been known to have a high in vitro affinity for sterols compared to other lipids [58,115]. Consequently, these molecules can be used to manipulate the cellular cholesterol content, to modify cholesterol metabolism [115,116] and to stimulate the removal of cholesterol from a variety of cells in culture [80,117-119]. It should be noted that cholesterol extraction by CDs is both time- and dose-dependent. In addition, the exposure of cells to modified β-CD in the 10-100 mM concentration range results in high rates of cell cholesterol efflux. Some typical examples are presented in this section.
CDs have been used to demonstrate the presence of different kinetic pools of cholesterol in cell models, and recently to monitor the movement of cholesterol from monolayers [57] or liposome bilayers [60]. For instance, a typical paper was published in 2001 by Leventis and Silvius [60]. In order to characterize the capacity of CDs to bind cholesterol, the authors examined the catalytic transfer of cholesterol between liposomes composed of 1-stearoyl-2-oleoyl phosphatidylcholine (SOPC) or SOPC/cholesterol. In the steady state, under conditions where a negligible fraction of the sterol is bound to CD (i.e., at submillimolar concentrations), β- and γ-CD considerably accelerate the rate of cholesterol transfer between lipid vesicles (63- and 64-fold, respectively). This acceleration is clearly greater than that of phospholipid transfer; the opposite is true for α- and methyl-β-CD. The kinetics of CD-mediated cholesterol transfer indicate that the transbilayer flip-flop of cholesterol is very rapid (half-time < 1-2 min at 37 °C). In the case of β-CD, the authors reported on the relative affinities of cholesterol for different phospholipids. As expected, strong variations in cholesterol affinity were observed depending on the degree of chain unsaturation and the headgroup structure; the transfer experiments revealed that cholesterol interacts with markedly higher affinity with sphingolipids than with other membrane phospholipids. As an extension of this work, Huang and London highlighted the possibility of preparing asymmetric vesicles during the exchange of membrane lipids between different vesicles by selective inclusion of phospholipids and/or cholesterol into the CD cavity [120]. Moreover, CDs can also be used to monitor the intracellular movement of cholesterol in tissue culture cells [121].
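For a first-order exchange process such as transbilayer flip-flop, the quoted half-time translates directly into a rate constant. As a simple consistency check (our arithmetic, not taken from [60]):

$$k = \frac{\ln 2}{t_{1/2}} \approx \frac{0.693}{1\text{–}2\ \mathrm{min}} \approx 0.35\text{–}0.69\ \mathrm{min}^{-1}$$

so a half-time below 1-2 min at 37 °C corresponds to a flip-flop rate constant of at least roughly 0.35-0.7 min⁻¹.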
As cholesterol extraction by CDs usually occurs at very high rates, CDs have been used to demonstrate the presence of different kinetic pools of cholesterol within cells. Unfortunately, only a few papers have studied the dynamics of this process in cells. For instance, the kinetics of cholesterol efflux have been examined in different cell types such as fibroblasts [117], human erythrocytes [122], rat cerebellar neurons [123], and differentiated human neurons and astrocytes [124]. All these results indicated that CDs induce the extraction of cholesterol, sphingolipids, and phospholipids from the cytoplasmic membrane, typically in the range of 50-90% of the original amount. Castagne and co-workers studied the cholesterol extraction by native and modified β-CDs from endothelial cells (HUVEC) [125]. Measurement of the residual cholesterol content of the cells revealed that cholesterol was extracted in a dose-dependent manner. As expected, a correlation was obtained between the cytotoxicity and the affinity for cholesterol, which was classified in the order β-CD < HP-β-CD < Me-β-CD. Similar results are obtained with other biological membranes [117-126]. Another typical example was published by Steck et al., who investigated the cholesterol movement induced by the treatment of human erythrocytes with Me-β-CD [122]. The results show that the rate of efflux is approximately three orders of magnitude higher than the cholesterol transfer from cells to synthetic vesicles. Therefore, Me-β-CDs are very efficient at extracting large amounts of membrane cholesterol at a very high rate. CDs can also catalyze the exchange of cholesterol between serum lipoproteins and cells [56].
ii) Cardiovascular diseases
Atherosclerotic vascular disease (ASVD) is caused by inflammation of the arterial wall, driven by increased blood cholesterol levels and an accumulation of cholesterol crystals in the subendothelial spaces, leading to atherosclerotic plaque formation [127]. It is noteworthy that cholesterol represents at most 10% of the total mass of the plaque. Consequently, the elasticity of the artery walls is reduced, the pulse pressure can be modified and blood clots can form (Scheme 6). Cardiovascular disease is currently the leading cause of death worldwide. As plasma cholesterol levels are associated with cardiovascular morbidity and mortality, the use of CDs to solubilize and remove cholesterol (and plaque) is very promising in the fight against this deadly condition.
It is noteworthy that high concentrations of modified β-CDs result in rates of cell cholesterol efflux far in excess of those achieved with physiological cholesterol acceptors such as high-density lipoproteins (HDL). Indeed, plasma levels of HDL are inversely associated with cardiovascular morbidity and mortality because this lipoprotein is responsible for transporting cholesterol to the liver, where it can be eliminated [128]. The opposite holds for low-density lipoproteins (LDL), whose function is to transport cholesterol, free or esterified, through the blood to the cells. HDL particles also reduce macrophage accumulation and thus help prevent, or even regress, atherosclerosis. Modulating cellular cholesterol regulation via the so-called reverse cholesterol transport (RCT) could be used to block atheroprogression at different stages of severity of atherosclerosis pathogenesis. From the pioneering works of Irie et al., it became clear that CDs can be useful in preventing atherosclerosis [115,129].
As the critical step in the formation of atherosclerotic plaque is the recruitment of monocytes (a type of white blood cell), which can differentiate into macrophages and ingest LDL, Murphy et al. proposed to prevent the activation/expression of monocyte adhesion [130]. This cell adhesion requires molecules such as CD11b. The authors reported that β-CD, but not its cholesterol complex, inhibits CD11b activation. As the cholesterol content of lipid rafts diminished after treatment with the cholesterol acceptors, the authors proposed that cholesterol efflux from serum monocytes is the main mechanism and is probably an effective means of inhibiting the development of atherosclerotic plaques.
In 2015, Montecucco et al. reported the anti-atherosclerotic action of KLEPTOSE® CRYSMEB (a mixture of methylated β-CDs in which 2-O-methylation is dominant) in atherosclerotic mouse models [131]. As expected, its interference with cholesterol metabolism has a positive impact on atherogenesis, the lipid profile and atherosclerotic plaque inflammation. In addition to reducing serum triglyceride levels, this CD reduces cholesterol accumulation in atherosclerotic plaques by modifying HDL-cholesterol levels. It is noteworthy that HDL and apolipoprotein A-I (ApoA-I) cause a dose-dependent reduction in the activation of CD11b (i.e., an anti-inflammatory effect on monocytes) through interactions with several receptors (for HDL) and with ABCA1 (for ApoA-I).
However, the process that leads to an aberrant accumulation of cholesterol in artery walls, forming atherosclerotic plaques, is complex. Thus, modulating RCT, as well as the expression and functionality of the transporters involved in this process (ABCA1, ABCG1, and SR-BI), could be very useful in the fight against atherosclerosis pathogenesis. As pointed out by Coisne and co-workers, "RCT alterations have been poorly studied at the arterial endothelial cell and smooth muscle cells levels" [132]. Consequently, the authors investigated the effect of different methylated β-CDs on the RCT of arterial endothelial and smooth muscle cells. It should be noted that these two cell types express basal levels of ABCA1 and SR-BI, whereas ABCG1 was found solely in arterial endothelial cells. The authors highlighted a correlation between the percentage of cholesterol extraction and the degree of methylation of the CDs, an effect that was clearly independent of the membrane composition. The expression levels of ABCA1 and ABCG1, as well as the cholesterol efflux to ApoA-I and HDL, were reduced owing to the interaction of the methylated β-CDs with cholesterol. Consequently, the cellular cholesterol involved in atherosclerotic lesions is lowered, and the expression of the ABCA1 and ABCG1 transporters involved in RCT is clearly modulated.
In 2016, Zimmer et al. reported on the use of HP-β-CD to reduce atherosclerotic plaques [133]. HP-β-CD can be used to dissolve cholesterol crystals (responsible for the complex inflammatory response), which can then be excreted from the body in urine. Mice were fed a cholesterol-rich diet for 12 weeks in order to promote fatty plaques in their blood vessels (i.e., to obtain atherosclerotic mice). After 8 weeks, injections of HP-β-CD were started (two injections per week). Over the remaining four weeks, the authors observed a plaque reduction of about 46% in atherosclerotic mice that had received HP-β-CD compared with untreated animals. From a mechanistic point of view, the researchers suspect that the CD boosts the activity of macrophages, enabling them to attack excess cholesterol without causing inflammation. Indeed, the CD increases the activity of the liver X receptor (LXR), which is involved in the antiatherosclerotic and anti-inflammatory effects as well as in the improvement of RCT.
Moreover, α-CD can also be used to reduce LDL cholesterol and to alter the plasma fatty acid profile [134,135]. In 2016, a double-blind, placebo-controlled clinical trial was published on the effect of oral α-CD [136]. After 12 to 14 weeks, a daily 6 g dose of α-CD reduced fasting plasma glucose levels (by 1.6%, p < 0.05) and the insulin index (by 11%, p < 0.04) in 75 healthy men and women. In addition, LDL cholesterol levels were reduced by 10% (p < 0.045) compared with placebo. This CD was well tolerated, and no serious adverse events were reported; only about 8% of patients treated with α-CD reported side effects such as minor gastrointestinal symptoms (versus 3% for the placebo). Consequently, α-CD, which is safe and well tolerated, reduced LDL cholesterol and improved fasting plasma glucose.
The ability of CDs to change the contractility of arterial smooth muscle indicates that the cellular cholesterol level is an extremely important factor for the cardiovascular system. Continued research on this front could potentially lead to major advancements in the fight against heart disease.
iii) Neurologic diseases
As in other body systems, the cells of the nervous system are susceptible to cholesterol extraction mediated by CDs. In the present section, for the sake of clarity, only the potential applications of CDs against Alzheimer's disease and Niemann-Pick type C disease (AD and NPC, respectively) are reported.
AD is a chronic neurodegenerative disease that accounts for 60% to 70% of dementia cases. This disease is characterized by the formation of amyloid plaques in the brain and is often associated with the cerebral accumulation of amyloidogenic peptides (Aβ42). Their production is mediated by two neuronal enzymes (β- and γ-secretase), which can be inhibited by methylated β-CDs via cholesterol depletion [137]. Additionally, Yao and co-workers demonstrated that HP-β-CD reduces cell membrane cholesterol accumulation in N2a cells overexpressing the Swedish mutant APP (SwN2a) [138]. Moreover, this CD dramatically lowered the levels of Aβ42 in cells as well as amyloid plaque deposition, by reducing β-cleavage of the APP protein and by upregulating gene expression involved in cholesterol transport. In cell models, this CD also improved clearance mechanisms.
CDs also exert significant beneficial effects in NPC disease, which shares neuropathological features with AD. This disorder is an abnormal endosomal/lysosomal storage disease associated with genetic mutations in the NPC1 and NPC2 genes, which code for proteins involved in intracellular cholesterol transport. The impaired function of these proteins causes progressive neurodegeneration as well as liver and lung diseases. As the two proteins act in tandem to promote the export of cholesterol from endosomes/lysosomes, CDs can bypass the functions of NPC1 and NPC2 and can trap and transport membrane-stored cholesterol out of endosomes/lysosomes [139]. This ability of CDs to sequester and transport cholesterol could potentially lead to major advancements in our ability to fight neurodegenerative diseases.
iv) Antipathogen activities
Cholesterol levels in the plasma membrane are extremely important in many parts of the viral infection process, such as the entry and release of virions from the host cell, as well as for the transport of various viral proteins. CDs have a clear antiviral activity against influenza virus [140], human immunodeficiency virus (HIV-1) [141], murine corona virus [142], poliovirus [143], human T cell leukemia virus (HTLV-1) [144], Newcastle virus [145,146], varicella-zoster [147], duck and human hepatitis B virus [148,149], bluetongue virus [150], etc. In these cases, the ability of CDs to decrease membrane cholesterol was proposed as the antiviral mechanism. The biological effects of the CDs can be classified according to their role: i) impeding viral entry into the host cell, ii) decreasing the relative infectivity of the virions, iii) decreasing the observed viral titer, and iv) disrupting the surface transport of influenza virus hemagglutinin. A few typical examples of the effect of CDs on the pathogenicity of several viruses are reported below.
HIV is one of the most widely studied viruses in terms of the effects of CDs. For instance, sulfated CDs are able to inhibit HIV infection [151,152]. In 1998, Leydet et al. demonstrated anti-HIV and anti-cytomegalovirus activity for several charged CD derivatives [153]. In 2008, Liao et al. reported that HP-β-CD also exhibits anti-HIV activity based on cholesterol depletion [154], although the precise mechanism had not yet been determined. Since membrane cholesterol [155] and lipid raft-based receptors [156] are strictly required for infectivity and HIV entry, CDs are excellent candidates for use as a chemical barrier in AIDS prophylaxis.
Another common viral disease is caused by herpes simplex virus (HSV), leading to several distinct medical disorders including orofacial and genital herpes and encephalitis [157]. In this context, the anti-HSV properties of native CDs (α- and β-CD) have been evaluated against HSV-1 and HSV-2 [158]. The antiviral properties were clearly dependent on the cavity size: α-CD exhibited no significant antiherpetic activity, while, under similar conditions, β-CD reduced both cell-free and cell-associated virus more effectively than acyclovir (i.e., an antiherpetic drug). Indeed, the results revealed almost complete protection of Vero cells against acyclovir-sensitive and acyclovir-resistant strains of HSV. The ability of β-CD to impede virus replication was proposed as the antiviral mechanism.
The potential occurrence of synergistic effects presents a special case, and may occur when one substance increases the activity of another. Currently, gaps in our knowledge of the circumstances under which such effects may occur (e.g., mixture composition, contact time, species, and exposure concentrations) often hamper predictive approaches. However, since CDs are able to extract cholesterol and other lipids from the viral membrane, it is likely that their combination with virucides or antiviral drugs acting on the same target results in a synergistic effect. Based on this assumption, our group studied the combination of di-n-decyldimethylammonium chloride, [DiC10][Cl] (the most widely used cationic surfactant with intrinsic virucidal activity), and native CDs (α-, β- and γ-CD) [159]. A marked synergism was observed with γ-CD against lipid-enveloped DNA and RNA viruses (HSV-1, respiratory syncytial virus (RSV), and vaccinia virus (VACV)). Indeed, noticeable reductions of the concentration of [DiC10][Cl] (i.e., the active virucide) were obtained: 72, 40 and 85% against HSV-1, RSV and VACV, respectively. In all cases, submillimolar [DiC10][Cl] and γ-CD concentrations were required to obtain a "6-log reduction" (equivalent to a 99.9999% reduction) of the viral titer. For these diluted solutions, free CD and [DiC10] species prevail, in line with Le Châtelier's principle; moreover, the micellization equilibrium is not relevant, as the virucidal activity was clearly obtained in the premicellar region. The proposed mechanism of the synergy is based on the ability of the CD to rapidly extract cholesterol from the viral envelope. Indeed, γ-CD catalyzes the rapid exchange of cholesterol between the viral envelope and the aqueous solution; the sequestration of cholesterol in the bulk phase facilitates the insertion of [DiC10] into the lipid envelope, which leads to virus inactivation (Scheme 7). This means that γ-CD accelerates the rate of cholesterol extraction by a larger factor than α- or β-CD. The proposed mechanism is highly compatible with the results of Leventis and Silvius (see above) [60]. These results demonstrate a clear effect of CDs on the "viability" of enveloped viruses and provide evidence of their potential use to improve the efficiency of common antiviral medications.
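As a quick check on the "6-log reduction" figure (our arithmetic, not from [159]): the log reduction LR of a viral titer is defined from the initial and final infectious counts N₀ and N as

$$\mathrm{LR} = \log_{10}\!\left(\frac{N_0}{N}\right), \qquad \mathrm{LR} = 6 \;\Longrightarrow\; \frac{N}{N_0} = 10^{-6}$$

i.e., only one infectious unit in a million survives, which matches the stated 99.9999% reduction.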
As cholesterol extraction is general and not limited to viral infections, a whole range of studies have shown that the presence of CDs impedes the entry of bacteria, fungi and parasites into host cells. This effect has been demonstrated for Plasmodium species [160], Campylobacter jejuni [161], Leishmania donovani [162], etc., and can be explained by the vital role of lipid rafts in the binding and entry of pathogens into host cells. Therefore, synergistic effects can also be expected with antimicrobial agents; such behavior has been attributed to the interaction of β-CD with the lipid membrane components [163]. Other relevant examples can be found in the review of Macaev et al. [164].
Conclusion
This review provides an overview of the current and potential applications of CDs through their interactions with endogenous substances, i.e., substances that originate from within an organism, tissue or cell. The majority of these applications are based on the capacity of CDs to withdraw cholesterol from the plasma membrane. This behavior offers several applications, such as cholesterol manipulation, control of viral and bacterial infections, and treatment of Alzheimer's and heart diseases. Moreover, CDs present a viable basis in the context of "green pharmacy and medicine". In the last decade, the concept of "ecofriendly pharmacy" emerged in response to Kreisberg's question: "what can clinicians do to reduce the environmental impacts of medications?" [165]. Of course, the answers are based on principles similar to those of green chemistry, initially developed by Anastas and Warner [166]. These principles cover various concepts, such as: i) the use of bio-sourced ingredients, ii) the use of "green concepts" during production (chemicals, synthesis processes, life cycle engineering, packaging, waste management), iii) the reduction of the negative impact of medication transportation, iv) the reduction of the healthcare environmental footprint, v) the reduction of the use of pharmaceuticals and vi) the improvement of ultimate drug disposal through take-back programs [167]. As CDs are bio-sourced compounds with very low toxicity and are easily biodegradable, they can be used to obtain more sustainable drug formulations in which CDs act as an active green ingredient and not only as an excipient. It is noteworthy that these CDs can be used alone or in combination with common petro-sourced medications. If a synergistic effect between the two molecules is obtained, a significant amount of the drug can be replaced by eco- and biocompatible CDs whilst maintaining the same biological activity. This is particularly interesting as it at least partially mitigates the negative environmental impact of pharmaceutical formulations. Consequently, in this context of "greener pharmacy", CDs will no doubt contribute to preserving our planet in the coming years.
"Chemistry",
"Environmental Science",
"Medicine"
] |
Targeting Signaling Pathway Downstream of RIG-I/MAVS in the CNS Stimulates Production of Endogenous Type I IFN and Suppresses EAE
Type I interferons (IFN), including IFNβ, play a protective role in multiple sclerosis (MS) and its animal model, experimental autoimmune encephalomyelitis (EAE). Type I IFNs are induced by the stimulation of innate signaling, including via cytoplasmic RIG-I-like receptors. In the present study, we investigated the potential effect of a chimeric protein containing the key domain of RIG-I signaling on the production of CNS-endogenous IFNβ and asked whether this would exert a therapeutic effect against EAE. We intrathecally administered an adeno-associated viral (AAV) vector encoding a fusion protein comprising the RIG-I 2CARD domains (C) and the first 200 amino acids of mitochondrial antiviral-signaling protein (MAVS) (M) (AAV-CM). In vivo imaging in IFNβ/luciferase reporter mice revealed that a single intrathecal injection of AAV-CM resulted in dose-dependent and sustained IFNβ expression within the CNS, which remained significantly increased for 7 days. Immunofluorescent staining in IFNβ-YFP reporter mice revealed extraparenchymal CD45+ cells, choroid plexus, and astrocytes as sources of IFNβ. Moreover, intrathecal administration of AAV-CM at the onset of EAE suppressed the disease in a type I IFN-dependent manner. These findings suggest that accessing the signaling pathway downstream of RIG-I represents a promising therapeutic strategy for inflammatory CNS diseases, such as MS.
Introduction
Interferon beta (IFNβ), a member of the type I IFN family, has been shown to play a protective role in multiple sclerosis (MS) and experimental autoimmune encephalomyelitis (EAE), the most common animal model used to understand aspects of MS [1]. Type I IFNs are induced by the activation of innate receptors, including Toll-like receptors (TLR) and retinoic acid-inducible gene I (RIG-I), that recognize pathogen- or danger-specific signatures [2]. Innate receptors constitute one of the mechanisms involved in the regulation of inflammation in the CNS, and accessing these pathways may provide a potential therapeutic target for regulating autoimmune inflammation in MS.
Stimulation of innate receptors within the CNS has been shown to induce IFNβ and infiltration of myeloid cells with an EAE-suppressive function [3]. We previously showed that a single intrathecal injection of different innate ligands induced transient expression of endogenous IFNβ in the CNS, recruited myeloid cells to the CNS and transiently suppressed EAE [4,5].
Signaling via RIG-I, a cytoplasmic RNA sensor associated with mitochondrial antiviral-signaling protein (MAVS), plays a critical role in the induction of IFNβ [6,7] and has protective functions in EAE [8]. In the present study, we targeted the signaling pathway downstream of RIG-I in order to stimulate the production of endogenous IFNβ.
We intrathecally administered an adeno-associated virus (AAV) vector encoding a fusion protein (CM) comprising RIG-I 2CARD domains and the first 200 amino acids of MAVS (AAV-CM) [9]. CM acts downstream of RIG-I and MAVS to induce a plethora of immunoregulatory mediators, including IFNβ [9]. We used this AAV approach to ask whether overexpression of CM in the CNS induces IFNβ and how this would exert a therapeutic effect against EAE.
Our findings show that intrathecal treatment with CM induced IFNβ in extraparenchymal blood-derived cells, choroid plexus and astrocytes, and suppressed EAE in an interferon-α/β receptor 1 (IFNAR1)-dependent manner. We link the protective action of CM to its ability to induce CNS-endogenous IFNβ via stimulation of a previously unexplored RIG-I/MAVS intracellular signaling pathway in the CNS. Our findings suggest that targeting such signaling pathways can be exploited for the development of novel therapeutic approaches for inflammatory CNS diseases, such as MS.
Intrathecal AAV-CM Induced an IFNβ Response in the CNS
We examined whether intrathecal administration of AAV-CM induces IFNβ in the CNS of mice that express a luciferase gene under the control of the IFNβ promoter [10]. AAV-CM and AAV-GFP were injected intrathecally, and luciferase activity was measured at 1, 3, 7 and 21 days post administration. In vivo imaging revealed that a single intrathecal injection of AAV-CM resulted in a significant increase of IFNβ expression within the CNS at 1, 3 and 7 days post injection (Figure 1A). Furthermore, IFNβ expression remained detectable for an additional 14 days post intrathecal injection (Figure 1A). The AAV-CM-induced IFNβ response was dose-dependent (Figure 1A). As expected, intrathecal AAV-GFP did not induce IFNβ at any time point.
In order to investigate the localization and cellular sources of IFNβ in response to AAV-CM, IFNβ/yellow fluorescent protein (YFP) knock-in mice were used [11]; C57BL/6 mice were used for the detection of GFP in AAV-GFP-treated mice. Brains were examined for IFNβ or GFP expression at 1, 3, and 7 days post administration [5]. Double immunostaining showed IFNβ colocalization with CD45+ cells distributed in the leptomeningeal space at 1 day post AAV-CM treatment (Figure 1B, arrows). CD45-expressing cells were more abundant in mice that had received AAV-CM than in AAV-GFP-treated mice at 1 day post injection. At this time point, we observed a few leptomeningeal CD45+ cells that colocalized with GFP in intrathecally AAV-GFP-treated mice, indicating that CD45+ cells expressed the vector (insert in Figure 1B, arrow).
At both 3- and 7-days post intrathecal AAV-CM treatment, we observed only a few CD45+ cells in the leptomeningeal space. IFNβ+ cells at 3- and 7-days post AAV-CM treatment were mainly found in the choroid plexus and in the perivascular area, where they colocalized with GFAP+ astrocytes (Figure 1C,D). At these time points, GFP+ cells were also observed in the choroid plexus and in the perivascular area, where they likewise colocalized with GFAP+ astrocytes (inserts in Figure 1C,D, arrows). Together, these findings show that AAV-CM infected leptomeningeal CD45+ cells, choroid plexus and astrocytes, and induced their expression of IFNβ.

Figure 1: (A) In vivo imaging of IFNβ/luciferase reporter mice that received intrathecal AAV-CM. Intrathecal administration of AAV-CM induced dose-dependent IFNβ expression in the CNS; the level of IFNβ was significantly increased at 1-, 3- and 7-days post injection and remained detectable up to 21 days post injection (n = 3-4 in each group). (B-D) Micrographs of brain sections from mice that received AAV-CM or AAV-GFP intrathecally; nuclei were stained with DAPI (blue). (B) Co-localization (arrows) of IFNβ/YFP+ (green) and extraparenchymal CD45+ cells (red); the insert shows co-localization of GFP+ (green) and extraparenchymal CD45+ cells. (C) IFNβ/YFP+ (green, arrow) or GFP+ cells (green in insert, arrow) in the choroid plexus. (D) Co-localization (arrows) of IFNβ/YFP+ (green) or GFP+ cells (green in insert) and GFAP+ astrocytes in the perivascular area of the lateral ventricle. Data are presented as mean ± SEM. Results were analyzed using the two-tailed Mann-Whitney U-test. * p < 0.05. Scale bars: 10 µm.
Intrathecal AAV-CM Treatment Enhanced CNS Recruitment of Myeloid Cells
The results from immunostaining suggested that AAV-CM induces CNS recruitment of CD45+ cells, including polymorphonuclear cells, at 1 day post treatment. To examine this further, mice were administered AAV-CM or a control vector by intrathecal injection via the cisterna magna, and CNS tissues were analyzed by flow cytometry at 1 day post injection. Blood-derived myeloid cells were distinguished from microglia by CD45high versus CD45dim discrimination (Figure 2A). We found that AAV-CM induced a significant increase in the proportion of CD45high cells in CNS tissues (Figure 2A), similar to that observed by immunostaining (Figure 1B). Moreover, AAV-CM induced significant recruitment of myeloid cells (CD45hi CD11bhi), including monocytes (CD45hi CD11bhi GR1low/− F4/80+) and granulocytes (CD45hi CD11bhi GR1hi F4/80−) (Figure 2A). Initial studies showed that the AAV-CM-induced infiltration of CD45high cells was dose-dependent (Supplementary Figure S1). As expected, low numbers of CD45high cells were detected in the AAV-GFP-treated control (Ctrl) mice.
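As an illustration only, the hierarchical gating logic described above can be expressed as threshold rules on marker intensities. The thresholds, column names and synthetic data below are hypothetical and are not taken from the study:

```python
import numpy as np
import pandas as pd

# Hypothetical compensated, log-scaled intensities for each event (cell)
rng = np.random.default_rng(0)
events = pd.DataFrame({
    "CD45": rng.lognormal(2.0, 1.0, 10_000),
    "CD11b": rng.lognormal(1.5, 1.0, 10_000),
    "GR1": rng.lognormal(1.0, 1.0, 10_000),
    "F4_80": rng.lognormal(1.0, 1.0, 10_000),
})

# Hypothetical gate thresholds (in practice, set from controls)
CD45_HI, CD11B_HI, GR1_HI, F480_POS = 20.0, 10.0, 8.0, 5.0

leukocytes = events[events["CD45"] > CD45_HI]          # CD45high: blood-derived
myeloid = leukocytes[leukocytes["CD11b"] > CD11B_HI]   # CD45hi CD11bhi
monocytes = myeloid[(myeloid["GR1"] <= GR1_HI) & (myeloid["F4_80"] > F480_POS)]
granulocytes = myeloid[(myeloid["GR1"] > GR1_HI) & (myeloid["F4_80"] <= F480_POS)]

print(f"myeloid: {len(myeloid)}, monocytes: {len(monocytes)}, "
      f"granulocytes: {len(granulocytes)}")
```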
These findings show that AAV-CM induces the recruitment of myeloid cells into the CNS. Therefore, we isolated RNA from CNS tissue for RT-qPCR analysis of chemokines that recruit myeloid cells. Intrathecally AAV-CM-treated mice showed significant upregulation of the mRNA levels of CCL2, CXCL10 and CXCL2, which are involved in monocyte and neutrophil recruitment, at 1 day post dose (Figure 2B). The anti-inflammatory cytokine IL-10, which is induced by IFNAR1 and RIG-I signaling, was also upregulated in response to AAV-CM (Figure 2B).
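Fold changes reported by RT-qPCR are conventionally computed with the 2^(−ΔΔCt) method; the paper does not state which quantification scheme was used, so the sketch below, with made-up Ct values, is purely illustrative:

```python
# Illustrative 2^(-ΔΔCt) fold-change calculation with hypothetical Ct values.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_ctrl, ct_ref_ctrl):
    """Return fold change of a target gene, normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt, treated sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl            # ΔCt, control sample
    dd_ct = d_ct_treated - d_ct_ctrl                    # ΔΔCt
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: a chemokine vs. a housekeeping gene (e.g., GAPDH)
print(fold_change(ct_target_treated=22.0, ct_ref_treated=18.0,
                  ct_target_ctrl=26.0, ct_ref_ctrl=18.5))   # ≈ 11.3-fold up
```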
Intrathecal AAV-CM Suppressed EAE in an IFNAR-Dependent Manner
We asked whether the induction and prolonged production of endogenous IFNβ would exert a therapeutic effect against EAE. To investigate this, C57BL/6 mice were immunized with MOG p35-55 and randomized on the day of disease onset, which in all cases was the loss of tail tonus (grade 2). Mice were administered AAV-CM, AAV-GFP or PBS into the cisterna magna and evaluated for clinical symptoms over the following ten days. The mean clinical score increased significantly in AAV-GFP- or PBS-treated mice from 1 to 5 days post treatment but did not change in mice that received an intrathecal injection of AAV-CM (Figure 3A). Importantly, the disease-modulatory effect of intrathecal AAV-CM was abrogated in IFNAR1-deficient mice, in which disease symptoms in AAV-CM-treated mice worsened similarly to those in AAV-GFP- or PBS-treated mice (Figure 3B). For ethical reasons, mice were sacrificed when they reached grade 5 or if hind limb paralysis persisted for 2 days.

Figure 2: Intrathecal AAV-CM recruits myeloid cells to the healthy CNS. (A) Flow cytometric gating strategy to distinguish CD45high leukocytes from CD45dim microglia, as well as CD11bhigh macrophages/monocytes (CD45hi CD11bhi GR1low/− F4/80+) and granulocytes (CD45hi CD11bhi GR1hi F4/80−). The proportions of CD45high cells, CD11bhigh cells, monocytes and granulocytes were significantly increased in the CNS tissues of mice upon intrathecal AAV-CM treatment (n = 4-6 per group). (B) RT-qPCR analysis of brains showed that CCL2, CXCL2, CXCL10 and IL-10 were significantly induced upon intrathecal AAV-CM treatment at 1 day post dose (n = 4-6); control (ctrl). Data are presented as mean ± SEM. Results were analyzed using the two-tailed Mann-Whitney U-test. * p < 0.05, ** p < 0.01.
Intrathecal AAV-CM Altered Inflammatory Programs in the CNS of Mice with EAE
We asked whether and how intrathecal AAV-CM would impact the infiltration of immune cells into the CNS, using flow cytometry analysis in mice with EAE. The results showed that the percentages of CD45high cells did not differ between the control and AAV-CM-treated mice (Figure 4A). However, further analysis of CD45high CD11b+ cells revealed that the proportions of infiltrating myeloid cell populations, including neutrophils, were significantly increased in AAV-CM-treated mice (Figure 4A), whereas the proportions of the other populations examined remained unchanged (not shown).
The spinal cords of C57BL/6 mice with EAE were examined for demyelination and infiltration (Figure 4B). Luxol fast blue (LFB) staining revealed a loss of myelin in the parenchyma of the spinal cord in the control EAE mice. Cresyl violet staining showed infiltrating cells in the corresponding parenchymal areas in the control mice. Myelin loss was reduced by AAV-CM treatment, and infiltration in spinal cord sections was predominantly extraparenchymal, in the meninges (Figure 4B).
To assess how the activation of RIG-I and IFNAR downstream signaling influenced CNS inflammatory programs in mice with EAE, we examined the expression of inflammation-associated mediators in response to AAV-CM using RT-qPCR. We found that the levels of IFNα, IFNβ, IL-10, IL-1β and IFNγ mRNA, as well as those of the IFNAR-associated downstream signaling factors IRF7 and IRF9, were significantly elevated in CNS tissue from AAV-CM-treated mice at 1 day post dose (Figure 4C).
Figure 4: (B) In the control group of mice, Cresyl violet staining showed cell infiltration into the parenchyma of the spinal cord (arrows), which correlated with an extensive loss of LFB staining (marked area) in corresponding areas; mice with EAE that were treated with intrathecal AAV-CM showed cell accumulation in the meninges and a reduced loss of LFB staining. Scale bar: 100 µm. (C) RT-qPCR analysis of brains showed that IFNα, IFNβ, IFNγ, IL-10, IL-1β, IRF7 and IRF9 were significantly induced (STAT1, p < 0.0565) upon intrathecal AAV-CM treatment at 1 day post dose (n = 6-8). Data are presented as mean ± SEM. Results were analyzed using the two-tailed Mann-Whitney U-test. * p < 0.05, ** p < 0.01.
Discussion
In this study, we have shown that targeting the signaling pathway downstream of RIG-I and MAVS stimulated the production of endogenous IFNβ in the CNS and exerted a therapeutic effect against EAE in an IFNAR1-dependent manner. Histopathology of control mice with EAE showed infiltrating cells in the spinal cord white matter, as well as myelin loss in corresponding areas. In contrast, infiltrating cells were predominantly found in the meninges of the spinal cord and myelin loss was minimal upon AAV-CM treatment.
Immunohistological and flow cytometry analyses showed the recruitment of CD45+ cells to the CNS in healthy mice upon AAV-CM administration. These CD45+ cells included monocytes and granulocytes. Activation of innate receptors, including RIG-I signaling, has been shown to induce CCL2 and CXCL2, monocyte- and neutrophil-chemoattractants [12]. Here, we have demonstrated that both chemokines were upregulated in response to AAV-CM, supporting the idea that CM activated downstream RIG-I and MAVS signaling.
We have previously shown that the recruitment of immune cells into the CNS in response to innate receptor activation can protect against EAE, and that infiltrating CD45+ cells are a source of IFNβ [4,5]. In contrast to our previous work, where we observed a transient therapeutic effect on EAE after a single intrathecal injection of innate ligands [4,5,13], in the present study we observed prolonged IFNβ expression as well as sustained protection against EAE.
Luciferase activity in AAV-CM-treated IFNβ/luciferase reporter mice was significantly increased during the first 7 days and remained detectable for a further 14 days. A study by Aschauer et al. (2013) demonstrated that AAV8 effectively transduces cells of the CNS, particularly astrocytes [14]. Similarly, Pignataro and colleagues showed high AAV8 transduction efficiency within CNS tissue, including astrocytes and oligodendrocytes [15]. Our experiments showed co-localization of GFP with CD45+ cells, cells of the choroid plexus, and astrocytes. As AAV8-CM and AAV8-GFP should transduce the same cells, our findings suggest that AAV-CM transduced leptomeningeal CD45+ cells, choroid plexus cells, and astrocytes, and induced their expression of IFNβ.
The RT-qPCR analysis showed increased levels of IFNβ and IL-10 in the CNS tissues of mice with EAE. Both IL-10 and IFNβ are known to play critical roles in the regulation of EAE and have been shown to contribute to the anti-inflammatory environment in the CNS [5,16]. IFNβ promotes the immunosuppressive activity of myeloid cells [17], and it has been shown that type I IFN can drive the expression of IFNγ, which was also upregulated in our study [18,19]. Although normally considered pro-inflammatory, immunomodulatory roles for IFNγ have been proposed [20]. In our study, we found the levels of IFNα/β and IL-10 to be increased more than IFNγ, suggesting that AAV-CM treatment shifted the inflammatory response toward a protective response.
Moreover, dysregulation of IL-10 has been associated with an enhanced risk for the development of autoimmune diseases [21]. Our previous study showed that neutrophils are one source of IL-10 [4]. Accumulating evidence suggests that neutrophils can acquire a suppressive phenotype under certain conditions and contribute to the regulation of inflammation [22]. We have previously shown that suppressive neutrophils can transfer protection against EAE [4]. In the present work, neutrophils were observed to be significantly increased in mice with EAE when treated with intrathecal AAV-CM; however, we did not examine whether the neutrophils in the present study contributed to the observed disease amelioration.
RIG-I recognizes single-stranded RNA. The downstream signaling of RIG-I involves MAVS and the activation of NF-κB, which regulates the expression of cytokines and chemokines including IFNβ and CXCL10 [23,24]. Accordingly, in our study, the level of CXCL10 increased in response to CM, suggesting the involvement of the NF-κB pathway in CM-induced CXCL10 expression. It has been shown that NF-κB-deficient mice are resistant to EAE, and activation of the NF-κB pathway exacerbates EAE [25,26]. Consistent with this, neutralizing IFN-inducible CXCL10 has been shown to exacerbate EAE [27], and increased EAE susceptibility has been observed in CXCL10-deficient mice [28].
The RT-qPCR analysis showed increased mRNA levels of IFNα, IRF7, and IRF9 in the CNS tissues of mice with EAE, indicating activated type I IFN-IFNAR signaling [29]. Supporting this, the therapeutic effect of CM-induced type I IFN was absent in mice that lack IFNAR1 signaling.
EAE Induction
C57BL/6 and IFNAR1-KO mice were immunized as described previously [30] with 100 µL of emulsion containing 100 µg myelin oligodendrocyte glycoprotein peptide 35-55 (MOG p35-55; TAG Copenhagen A/S, Denmark) in complete Freund's adjuvant (BD Biosciences, Sparks, NV, USA) with 200 µg heat-inactivated Mycobacterium tuberculosis (BD Biosciences), injected subcutaneously into each hind flank. Mice received an intraperitoneal injection of Bordetella pertussis toxin (0.3 µg, Sigma-Aldrich, Brøndby, Denmark) at the time of immunization and 1 day post-immunization. Mice were monitored daily for loss of body weight and EAE symptoms. The EAE grades were defined as follows: grade 0, no signs of disease; grade 1, weak or hooked tail; grade 2, floppy tail indicating complete loss of tonus; grade 3, floppy tail and hind limb paresis; grade 4, floppy tail and unilateral hind limb paralysis; grade 5, floppy tail and bilateral hind limb paralysis.
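For bookkeeping during daily monitoring, this grading rubric maps directly to a lookup table. The following minimal Python sketch is illustrative only; the names and structure are ours, not part of the published protocol.

```python
# Illustrative lookup for the EAE clinical grading scale described above.
EAE_GRADES = {
    0: "no signs of disease",
    1: "weak or hooked tail",
    2: "floppy tail (complete loss of tonus)",
    3: "floppy tail and hind limb paresis",
    4: "floppy tail and unilateral hind limb paralysis",
    5: "floppy tail and bilateral hind limb paralysis",
}

def describe_grade(grade: int) -> str:
    """Return the clinical description for an EAE grade (0-5)."""
    if grade not in EAE_GRADES:
        raise ValueError(f"EAE grade must be 0-5, got {grade}")
    return EAE_GRADES[grade]
```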
Intrathecal Injection
Mice were anesthetized by inhalation of isoflurane (Abbott Laboratories), and a 30-gauge needle (bent 55° with a 2 mm tip) attached to a 50 µL Hamilton syringe was used to perform an intrathecal injection of ssAAV8-EF-CARD-MAVS (AAV-CM), ssAAV8-EF-Stuffer-eGFP-WPRE (AAV-GFP) [9], or phosphate-buffered saline (PBS). To determine an optimal dose for the induction of IFNβ, mice received AAV-CM at a dose of 2.5 × 10¹⁰ or 2.5 × 10⁸ viral particles (vp)/animal, in a total volume of 10 µL, injected into the cerebrospinal fluid via the intrathecal space of the cisterna magna. The optimal dose was determined based on the strongest induction of IFNβ expression in luciferase reporter mice by in vivo imaging; the results showed that the optimal dose of AAV-CM was 2.5 × 10¹⁰ vp/mouse, which was used throughout the study. The dose of AAV-GFP was accordingly set at 2.5 × 10¹⁰ vp/mouse to match the AAV-CM dose.
Tissue Processing
Mice were euthanized with an overdose of sodium pentobarbital (100 mg/kg, Glostrup Apotek, Glostrup, Denmark) and perfused with ice-cold PBS. For flow cytometry, CNS tissue was placed in ice-cold PBS. For histology, CNS tissue was post-fixed with 4% paraformaldehyde (PFA), immersed in 30% sucrose in PBS, then frozen, and 16 µm thick tissue sections were cut on a cryostat (Leica, Copenhagen, Denmark). For reverse transcriptase-quantitative polymerase chain reaction (RT-qPCR), CNS tissues were placed in 0.5 mL TRIzol Reagent (Ambion, Denmark) and stored at −80 °C until needed for RNA extraction [13].
Flow Cytometry
A single-cell suspension was obtained by forcing the CNS tissue through a 70 µm cell strainer (Falcon, Teterboro, NJ, USA) with Hank's buffered salt solution (HBSS, Gibco, Waltham, MA, USA) supplemented with 2% fetal bovine serum (FBS, Merck, Darmstadt, Germany). Myelin was cleared by resuspending cells in 37% Percoll (GE Healthcare Biosciences AB, Uppsala, Sweden) in a buffer consisting of 45 mL 10× PBS, 3 mL HCl, and 132 mL water, pH 7.2, followed by centrifugation at 2500× g for 20 min at room temperature. The myelin layer was removed, and the cell pellet was washed. Cells were incubated in a blocking solution
Statistical Analysis
The ROUT test (Q = 1) was used to identify significant outliers, which were removed before further statistical testing. Data were tested for normal distribution and analyzed with a two-tailed Student's t-test or, for non-normally distributed data, the non-parametric Mann-Whitney test. All statistical analyses were performed using GraphPad Prism version 9 (GraphPad Software Inc., San Diego, CA, USA). The results are presented as means ± SEM. Values of p < 0.05 were considered significant.
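As a concrete illustration of this workflow, the sketch below reproduces it with SciPy. GraphPad's ROUT test has no direct SciPy equivalent, so a simple median/MAD outlier rule stands in for it; this is an assumption for illustration, not the authors' exact procedure.

```python
# Sketch of the statistical workflow described above (SciPy).
import numpy as np
from scipy import stats

def remove_outliers(x, z_cut=3.5):
    """Drop points with a modified z-score above z_cut (stand-in for ROUT)."""
    x = np.asarray(x, dtype=float)
    mad = np.median(np.abs(x - np.median(x)))
    if mad == 0:
        return x
    return x[np.abs(0.6745 * (x - np.median(x)) / mad) <= z_cut]

def compare_groups(a, b, alpha=0.05):
    a, b = remove_outliers(a), remove_outliers(b)
    # Shapiro-Wilk normality check, then the appropriate two-tailed test.
    normal = all(stats.shapiro(g).pvalue > alpha for g in (a, b))
    if normal:
        stat, p = stats.ttest_ind(a, b)
    else:
        stat, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    return {"normal": normal, "statistic": stat, "p": p}
```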
Conclusions
We have demonstrated that sustained CNS-endogenous IFNβ production via intrathecal administration of AAV-CM promotes protection against EAE. Our results provide a basis for further studies aiming at elucidating the mechanism of sustained IFNβ signaling and protection in CNS inflammatory diseases.
"Biology"
] |
Experimental Study on the Damage of Optical Materials by Out of Band Composite Laser
For this paper, experimental studies were performed on the damage of Ge- and Si-based flat windows by out-of-band lasers. The experimental results showed that out-of-band lasers can cause film damage and substrate damage to Ge and Si windows. The mechanism of high-energy laser damage to the windows mainly manifested as thermal effects. The composite laser damage thresholds for the substrate were 21.6 J/cm² for the Si window and 3 J/cm² for the Ge window. Comparison with the continuous laser and long-pulse laser experimental results showed that the long pulse-continuous composite configuration could effectively reduce the damage threshold. Compared to the long-pulse laser, the composite laser could achieve similar damage effects with a smaller energy density.
Introduction
Laser-induced damage to optical components is a key research issue for high-energy laser emission systems, and it is also one of the key technologies that must be resolved for the development of high-power optoelectronic countermeasure systems. Starting from the basic principles of the interaction between lasers and matter, a laser can interact with optical systems and optical elements through the laser thermal effect and laser-electron interactions. This provides a theoretical basis for a single-band laser to produce effects across the full optoelectronic band. Based on this principle, researchers have proposed the concepts of "in-band damage" and "out-of-band damage." "In-band damage" refers to the damage of an optoelectronic system by a laser within its operating band. Researchers generally believe that the mechanisms of "in-band damage" comprise semiconductor band structure theory, the thermoelectric effect, etc., and "in-band damage" has been widely exploited in contemporary optoelectronic countermeasures. "Out-of-band damage" refers to the damage of photoelectric systems by lasers outside the operating band. Earlier studies suggested that optoelectronic components respond weakly, if at all, to lasers outside the operating band. However, with the advancement of laser technology, more and more high-power, high-energy lasers have been applied in high-power laser emission systems. This has created a risk of damage to optical systems by high-energy out-of-band lasers. Therefore, it is necessary to carry out systematic experimental research on the interaction between high-energy out-of-band lasers and optical elements.
At present, research reports on "out-of-band damage" have mainly focused on photodetectors. The related literature describes experimental studies on interference with and damage to HgCdTe, InSb, and Si-CCD (charge-coupled device) detectors [1-7]. The mainstream view is that the mechanisms of "out-of-band damage" are mainly semiconductor band structure theory (CCD) and thermoelectric effects (mid-wavelength and long-wavelength infrared). Some researchers have found that out-of-band lasers can also damage window mirrors in experiments on laser interference effects in optical systems. Wang [8] used a deuterium fluoride laser to perform cumulative damage experiments on a visible-light planar-array CCD and found that multiple laser irradiations at different positions on the CCD surface, as well as multiple irradiations at the same position, damaged the K9 (a type of optical glass) window. Wang [9] experimented with a 3.8 μm continuous-wave laser against a ternary photoconductive HgCdTe detector system and found film and substrate damage in the Ge window at the laser irradiation spot; the internal filter also showed melting. Existing research holds that the key to "out-of-band damage" lies in whether the laser source has sufficient damage capability, and multi-mode composite lasers have this characteristic. A multi-mode composite laser consists of lasers with different wavelengths, different systems, and different frequency characteristics that act on the target simultaneously or alternately to obtain a better damage effect than a single continuous-wave or pulsed laser. Related studies have been conducted on composite lasers. Cheng [10] carried out an experimental study on the combined damage of a 1030 nm continuous laser and a 1064 nm pulsed laser and found that a "non-linear avalanche ablation" effect occurred under the combined or alternating action of the two lasers, giving the combined laser a stronger ablation effect than the pulsed laser; the average single-pulse ablation of the composite laser was 13 times that of the pulsed laser. Wang [11] found, in a study of pulse-pulse composite lasers, that the increase in target damage resulted from an increased power density; the damage effect of composite lasers is related to the overlap of the pulse time domains, and the damaging effect of composite lasers is better than that of long-pulse lasers. Jiao [12], using 1053 nm pulsed and 1064 nm compound lasers to study the irradiation of steel, found that the surface reflection of the steel decreased with increasing pulsed-laser frequency; pre-irradiation of steel plates with a long-pulse laser can increase their absorption of subsequent laser irradiation. Xiao [13,14] simulated the thermodynamic characteristics of continuous-pulse composite laser irradiation of aluminum alloys and found that the composite laser could significantly increase the size of the molten pool and the center temperature of the irradiation spot; the longer the "preheating" time, the shorter the yield time of the material, the larger the plastic deformation, and the larger the yield range.
From the above literature, it is clear that research on the irradiation effects of composite lasers is still in its infancy. Almost all reported studies have used low-energy, in-band laser sources, and the targets have mostly been photoelectric sensors and metal structural parts. Composite laser damage studies on optical components have not been reported. For this paper, high-power laser damage experiments were performed on common Ge-based and Si-based flat windows to provide technical support for the design of high-power laser emission systems.
Absorption and Scattering of Optical Film
A flat window is composed of surface optical films and a substrate. The following assumptions are used to solve for the reflectivity of an optical film: the film does not absorb incident light, its refractive index is uniform, and both interfaces are smooth. These assumptions accurately reflect the optical properties of general dielectric films. An optical coating made of a single-layer film has limited optical performance; to meet a variety of optical requirements, multilayer films are needed. Suppose an optical film composed of m layers; the j-th layer has refractive index $n_j$ and thickness $d_j$, and its characteristic matrix (at normal incidence) can be expressed as

$$M_j = \begin{pmatrix} \cos\delta_j & \dfrac{i}{n_j}\sin\delta_j \\ i\,n_j\sin\delta_j & \cos\delta_j \end{pmatrix},$$

where $\delta_j = (2\pi/\lambda)\,n_j d_j$ is the phase thickness of the j-th layer film. The combined characteristic matrix of the m-layer film is

$$\begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} = \prod_{j=1}^{m} M_j,$$

where $n_0$, $n_s$, and $\lambda$ are the refractive index of the incident medium, the refractive index of the substrate, and the laser wavelength, respectively. For transparent dielectric films, $m_{12}$ and $m_{21}$ are purely imaginary, and $m_{11}$ and $m_{22}$ are purely real. The reflection coefficient r and reflectance R of the multilayer film are

$$r = \frac{n_0 m_{11} + n_0 n_s m_{12} - m_{21} - n_s m_{22}}{n_0 m_{11} + n_0 n_s m_{12} + m_{21} + n_s m_{22}}, \qquad R = |r|^2.$$

The reflectivity of the optical film can be solved from the above equations [15-17].
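The matrix algebra above translates directly into a short numerical routine. The sketch below assumes normal incidence and non-absorbing layers, consistent with the assumptions stated in this section; the function name and the quarter-wave example are illustrative.

```python
# Transfer-matrix reflectivity of a multilayer dielectric film at normal
# incidence (non-absorbing layers), following the equations above.
import numpy as np

def multilayer_reflectance(n0, ns, layers, wavelength):
    """layers: list of (n_j, d_j) pairs; d_j in the same unit as wavelength."""
    M = np.eye(2, dtype=complex)
    for n_j, d_j in layers:
        delta = 2 * np.pi * n_j * d_j / wavelength  # phase thickness
        M_j = np.array([[np.cos(delta), 1j * np.sin(delta) / n_j],
                        [1j * n_j * np.sin(delta), np.cos(delta)]])
        M = M @ M_j
    m11, m12, m21, m22 = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    r = (n0 * m11 + n0 * ns * m12 - m21 - ns * m22) / \
        (n0 * m11 + n0 * ns * m12 + m21 + ns * m22)
    return abs(r) ** 2

# Example: quarter-wave MgF2 layer (n = 1.38) on glass (n = 1.52) at 550 nm.
R = multilayer_reflectance(1.0, 1.52, [(1.38, 550 / (4 * 1.38))], 550.0)
print(f"R = {R:.4f}")  # ~0.013, i.e. a strong anti-reflection coating
```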
Temperature Field of Optical Film
The direct cause of optical film damage under laser irradiation is the temperature rise produced by the thermal effect. To study laser damage of an optical film, the spatial and temporal distribution of its temperature field must be considered; this can be determined by solving the heat conduction equation with a laser source term. For a multilayer dielectric film, the heat conduction equation can be expressed (in axisymmetric form) as

$$\rho_i c_i\,\frac{\partial T_i}{\partial t} = k_i\!\left(\frac{\partial^2 T_i}{\partial r^2} + \frac{1}{r}\frac{\partial T_i}{\partial r} + \frac{\partial^2 T_i}{\partial z^2}\right) + g_i(r, z, t),$$

with the surface heat-exchange condition at the front surface

$$k_1\,\frac{\partial T_1}{\partial z}\bigg|_{z=0} = H\,T_1\big|_{z=0},$$

boundary conditions: r → ∞ or z → ∞: T → 0; and initial conditions: T_i(r, z, 0) = T_0(r, z). Here ρ_i, c_i, and k_i are the density, specific heat capacity, and thermal conductivity of the i-th layer film, respectively; H is the surface heat-exchange constant, which is related to surface thermal radiation and thermal convection; and g_i is the laser energy deposited in the i-th film,

$$g_i(r, z, t) = -\frac{\partial P_i(r, t, z)}{\partial z},$$

where P_i(r, t, z) is the Poynting vector of the i-th film. Energy transmission between the film layers satisfies the principles of temperature continuity and heat-flow balance (i = 1, 2, …):

$$k_i\,\frac{\partial T_i}{\partial z}\bigg|_{z_i} = k_{i+1}\,\frac{\partial T_{i+1}}{\partial z}\bigg|_{z_{i+1}} = \frac{T_i - T_{i+1}}{R_i},$$

where R_i is the thermal resistance at the interface between the i-th and (i+1)-th films, z_i and z_{i+1} are the positions of the interface, and T_i and T_{i+1} are the temperatures of the two films at the corresponding interface. Solving these equations gives the temperature distribution of the optical film [17-20].
Optical Element Substrate Thermal Response
The analysis assumes that the semiconductor material is crystalline and that its internal heat conduction may, in general, be anisotropic. The crystal structure of semiconductor materials such as silicon and germanium is the cubic crystal system, and the thermal conductivity tensor of this crystal structure is spherical (isotropic); heat conduction can therefore be considered the same in all directions within the material [21,22].
The heat-flux density vector follows from Fourier's law:

$$\mathbf{q} = -K_{ij}\,\frac{\partial T}{\partial x_j}\,\mathbf{e}_i,$$

where $K_{ij}$ is the thermal conductivity coefficient component, $x_j$ (j = 1, 2, 3) is the coordinate in the x, y, z directions, and $\mathbf{e}_i$ is the basis vector in the x, y, z directions. Substituting Equation (1) into the heat-flux density expression gives the heat conduction equation in the semiconductor material:

$$\rho c_p\,\frac{\partial T}{\partial t} = \frac{\partial}{\partial x_i}\!\left(K_{ij}\,\frac{\partial T}{\partial x_j}\right),$$

where ρ is the density and $c_p$ is the specific heat capacity at constant pressure. The heat-flux density at the laser-incident surface is

$$q_s(r) = A\,I(r),$$

where A is the surface absorptivity. The incident laser has a Gaussian distribution:

$$I(r) = I_0\,\exp\!\left(-\frac{2r^2}{w^2}\right),$$

where $I_0$ is the incident power density, r is the distance from the reference point to the center of the spot, and w is the spot radius.
Convection heat transfer on the rear surface of the target is expressed as $q_c = h\,(T - T_a)$, where h is the convective heat-transfer coefficient and $T_a$ is the ambient temperature. Solving Equations (8)-(12) gives the temperature distribution T(x, t) of a flat window under laser irradiation [23-25]. Because the physical parameters vary non-linearly with temperature, the temperature field is transient; the finite element method can therefore be used to calculate the time-dependent temperature distribution of a flat window under laser irradiation.
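Because the thermal parameters vary with temperature, the field must be computed numerically. The paper uses a finite-element model; as a much-simplified stand-in, the sketch below solves the 1-D through-thickness problem with an explicit finite-difference scheme, taking the absorbed flux at the centre of the Gaussian beam. All material values, the geometry, and the boundary simplifications are illustrative assumptions, not the authors' actual model.

```python
# Minimal explicit finite-difference sketch of laser heating through the
# window thickness (1-D), with the absorbed flux taken at the beam centre.
import numpy as np

def laser_heating_1d(k=150.0, rho=2330.0, cp=700.0,   # ~silicon, SI units
                     absorptivity=0.3, I0=1.5e7,      # W/m^2 at beam centre
                     thickness=5e-3, nz=101, t_end=1.0):
    dz = thickness / (nz - 1)
    alpha = k / (rho * cp)                  # thermal diffusivity
    dt = 0.4 * dz**2 / alpha                # explicit stability limit
    T = np.zeros(nz)                        # temperature rise above ambient
    q_abs = absorptivity * I0               # absorbed surface flux
    for _ in range(int(t_end / dt)):
        lap = np.zeros(nz)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
        T += alpha * dt * lap
        T[0] = T[1] + q_abs * dz / k        # front face: -k dT/dz = q_abs
        T[-1] = T[-2]                       # rear face insulated (simplification)
    return T

print(f"Peak surface temperature rise: {laser_heating_1d()[0]:.0f} K")
```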
Experimental Study of Composite Laser Damaged Flat Window
A composite laser damage experimental platform was set up, as shown in Figure 1. The continuous and pulsed laser triggering was controlled by a digital delay pulse generator, and the laser parameters were monitored online through a spectroscope. Before starting the experiment, we used coaxial indicating light to fix the spot position on the flat window. When ready, we used the digital delay pulse generator to control the laser delay and triggering. During the experiment, the laser status was monitored with a beam quality analyzer and a power meter. The laser irradiation time was 5 s, and the average value was recorded by repeating the experiment ten times for each power and energy value. The experimental observation system consisted of a visible-light high-speed camera, an infrared thermal imager, and a plasma spectrometer. We determined whether the window was damaged according to the temporal and spatial distribution of the spot collected by the plasma spectrometer, and we used a photodetector to monitor the window mirror for body damage. The CW laser used operated in the fundamental mode with a Gaussian distribution; the pulsed laser was dominated by the fundamental mode mixed with some higher-order modes. The laser parameters are shown in Table 1. The targets used in the experiments were silicon-based and germanium-based flat windows; the substrate material parameters are shown in Table 2. First, the absorptivity of the two kinds of window was measured: we irradiated each flat window with a 1.06 μm continuous laser and measured the reflected power with a power meter. Since the two substrates are not transparent in the near-infrared band, the absorptivity shown in Table 3 could be obtained directly.
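Because both substrates are opaque in the near-infrared, transmission can be neglected, so the absorptivity follows directly from the reflected-power measurement as A = 1 − P_reflected / P_incident. A quick sketch with invented numbers (not the measured values in Table 3):

```python
# Absorptivity from a reflected-power measurement for an opaque window:
# with no transmission, A = 1 - R = 1 - P_reflected / P_incident.
def absorptivity(p_incident_w: float, p_reflected_w: float) -> float:
    if not 0 <= p_reflected_w <= p_incident_w:
        raise ValueError("reflected power must lie between 0 and incident power")
    return 1.0 - p_reflected_w / p_incident_w

# Illustrative example: 100 W incident, 65 W reflected -> A = 0.35.
print(absorptivity(100.0, 65.0))
```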
Continuous Laser Damage Flat Window Experiment
A continuous laser was used to verify the damage effect of the out-of-band laser on the flat window. Thermal stress deformation caused by laser irradiation shifted the spot of the collimated light, so the thermal deformation state of the flat window under continuous laser loading was observed with a scattering detection method: Figure 2a shows the spot before loading, Figure 2b shows the spot shift caused by thermal distortion during loading, and Figure 2c shows the spot shift just before the lens body exploded.
Figure 2. Changes in scattered light spots caused by thermal distortion: (a) irradiation begins; (b) deformation starts; (c) body explodes.
Comparison of the experimental results showed that the film color of the germanium window changed from 600 to 660 °C and ripples were generated (Figure 3). At 660 °C, the film began to melt, and optical coating damage occurred. At 700 °C, substrate damage began. When the laser power reached 1500 W, the window body exploded. Film damage of the silicon window began at 500 °C (Figure 4), and the damage was similar to that of the germanium window. When the laser power reached 1440 W, the substrate of the silicon window exploded at 950 °C. When the laser irradiated the flat window, the asymmetrical distribution of the high-power laser beam and non-uniform absorption produced a temperature gradient and thermal stress in the material. The larger temperature gradient and thermoelastic stress first appeared in the irradiated area, causing thermal distortion of the window material. When the thermal stress gradually accumulated and exceeded the strength limit of the material, the irradiated area burst first, eventually causing macroscopic damage to the window [26,27]. It can be seen from Table 4 that the damage thresholds of the out-of-band NIR lasers for the Ge and Si windows were close. Body damage occurred because the substrate absorption rate increased after the optical coating melted, and the faster temperature rise then caused the substrate to burst. Damage to the optical film system was mainly caused by melting: in the central irradiated area, the film system disappeared and the substrate was severely ablated. This was because, when continuous laser light irradiated the window surface, the thermal stability of the substrate material was better than that of the film. The thermal stress when the substrate was heated to 550-600 °C was not enough to cause body damage to the window, but the optical film had already begun to melt and be destroyed.
Long-Pulse Laser Damage Experiment
A long-pulse laser damage experiment on the flat windows was performed. The absorption rates of the flat windows for a 1.03 μm laser were measured; the results are shown in Table 5. The absorption rate of the Si window at 1.03 μm was higher than at 1.06 μm, while the absorption rate of the Ge window at 1.03 μm was slightly lower than at 1.06 μm. We performed an S-on-1 experiment, loading five pulses per site, and observed the results shown in Table 6 (experimental results of the long-pulse laser). According to Table 6, when the laser energy reached 29 J, the Si window suffered coating damage, and at the same time a plasma flash produced by electron breakdown appeared in the irradiated area [28]. With increasing energy, the damage to the optical film became more serious; the damage phenomena are shown in Figures 5 and 6. When the energy reached 44 J, substrate damage occurred, and when the energy reached 80 J, the first pulse caused substrate damage. When the laser energy reached 11 J, coating damage occurred in the Ge window; the coating damage became more serious at 17-23 J, and when the energy reached 29 J, the first pulse caused damage to the substrate. Damage to the windows from long-pulse and continuous laser irradiation was mainly surface thermal fusion damage, and plasma absorption was the main cause of damage.
Short-Pulse Laser Damage Experiment
A short-pulse laser damage experiment on the windows was performed using the 1-on-1 method. The experimental results are shown in Table 7. Observation of the damage showed that, unlike long-pulse damage, the coating damage caused by the short-pulse laser was shallower. Under the experimental conditions of this paper, only the Ge window showed slight substrate damage, at a laser energy of 160 mJ, and no body damage occurred (Figures 7 and 8). This was because the short-pulse laser acted on the material for a short time, mainly through the laser electric-field effect, so the damage mostly appeared as film damage. Non-linear effects (such as non-linear absorption, non-linear refractive index, and self-focusing) induced by the high electric field when the high-power lasers acted on the windows were the main causes of damage; since the laser intensity in the short-pulse irradiated area reached the order of GW/cm², the self-focusing effect appeared. Combining the three sets of experimental results shows that out-of-band short-pulse lasers could cause only mild film damage, whereas long-pulse lasers could cause body damage.
Composite Laser Damage Experiment
According to the results of the pulsed-laser damage to the windows, although the short-pulse laser had a higher peak power, it did less damage to the window and had difficulty damaging the substrate. Therefore, this paper used a combination of a long-pulse and a CW laser for the damage experiment. The trigger time of the long-pulse laser was controlled by the digital delay pulse generator: when the temperature read by the thermal imager reached a preset value, the CW laser was turned off and the pulsed laser was triggered at the same time. The results of the composite laser experiments are shown in Tables 8 and 9, and the damage phenomena are shown in Figures 9 and 10. The Ge window results showed that the composite laser could cause slight damage to the film system at a laser energy of 2 J and a preheating temperature of 145 °C. The Si window results showed that the composite laser started to damage the optical film system at 17 J; when the pulsed-laser energy was less than 17 J, the continuous-laser preheating temperature needed to exceed 300 °C to cause mild film damage, which is consistent with the long-pulse damage mechanism discussed above. Increasing the continuous-laser preheating temperature while maintaining the pulsed-laser energy at 17 J led to more serious optical film damage, and substrate damage occurred when the preheating temperature reached 260 °C. Table 10 summarizes the damage threshold data for the different lasers. The giant-pulse laser waveform used in this experiment was a mixed spike-and-square waveform, with the spike pulse carrying about three to four times the energy of the square pulse. After the flat window was preheated by the continuous light, the spike-pulse part acted on the window body first; its high peak energy was absorbed by the optical film and substrate, the optical film melted instantly under the combined temperature rise, and the residual energy was absorbed directly by the substrate, eventually causing substrate damage.
Conclusions
(1) Out-of-band high-energy lasers can damage Ge and Si windows. NIR laser damage to Ge and Si windows with medium-wave anti-reflection coatings occurred at 500-600 °C. The damage thresholds of the NIR continuous laser and the pulsed laser for the Ge window were 2 kW/cm² and 37 J/cm², respectively, and the damage thresholds of the NIR continuous laser and pulsed laser for the Si window were 1.8 kW/cm² and 65 J/cm², respectively. The damage mechanism of the long-pulse and CW high-energy lasers is mainly manifested as a thermal effect, which damages the substrate more easily. The damage mechanism of the short-pulse, high-power laser was mainly manifested as the laser electric-field effect, but it was difficult for it to damage the substrate because of its low energy.
(2) The pulse-continuous composite laser could effectively improve the damage effect, and the long-pulse laser energy had a significant influence on the damage. The substrate damage thresholds for the composite laser were 21.6 J/cm² for the silicon-based flat window and 3.8 J/cm² for the germanium-based flat window.
"Physics",
"Engineering",
"Materials Science"
] |
The Effect of Cerebral Small Vessel Disease on the Subtypes of Mild Cognitive Impairment
Objectives: Cerebral small vessel disease (CSVD) is the most common vascular cause of dementia, and mild cognitive impairment (MCI) is an intermediate state between dementia and normal cognitive aging. The present study investigated the main imaging features of CSVD across different MCI subtypes in memory clinics. Methods: A total of 236 patients with MCI and 85 healthy controls were included: 109 amnestic MCI-multiple domain (amMCI), 38 amnestic MCI-single domain (asMCI), 36 non-amnestic MCI-multiple domain (namMCI), and 53 non-amnestic MCI-single domain (nasMCI) patients were diagnosed. All participants were evaluated with cognitive assessments and imaging features, including white matter hyperintensities (WMH), enlarged perivascular spaces (EPVS), cerebral microbleeds (CMBs), and cerebral atrophy, according to a standard procedure. Results: Patients with amMCI, namMCI, and nasMCI had more high-grade basal ganglia EPVS than healthy controls, and the percentage of high-grade basal ganglia EPVS in patients with amMCI was also higher than in patients with asMCI, namMCI, and nasMCI. There were more high-grade centrum semiovale EPVS in patients with amMCI than in all other groups. Patients with amMCI and namMCI had higher percentages of severe deep and periventricular WMH and more deep CMBs than healthy controls. All MCI groups had higher medial temporal lobe atrophy scores than healthy controls, and the scores of the amMCI group were also higher than those of the namMCI and nasMCI groups. Conclusions: Varied neuroimaging features of CSVD, including cerebral atrophy, were present in the different MCI groups, suggesting that vascular mechanisms contribute to the prodromal stage of dementia.
INTRODUCTION
Dementia has become an important health problem among the aging population in China, with 249.49 million people aged 60 years or older. Overall age-adjusted and sex-adjusted prevalence was estimated to be 6.0% for dementia, 3.9% for Alzheimer's disease (AD), and 1.6% for vascular dementia in people aged 60 years or older in China (1).
Mild cognitive impairment (MCI) is considered an intermediate state between dementia and normal cognitive aging, with a prevalence of 15.5%, and could provide important information about the population at risk for developing dementia (1,2). The concept of MCI was expanded to four subtypes, namely, amnestic MCI-single domain (asMCI), amnestic MCI-multiple domains (amMCI), non-amnestic MCI-single domain (nasMCI), and non-amnestic MCI-multiple domains (namMCI), which differ in etiology and outcome. Amnestic MCI (aMCI) is thought to have a high likelihood of progressing to AD, especially amMCI, while non-amnestic MCI (naMCI) is assumed to have a higher likelihood of progressing to a non-AD dementia (2,3). Cerebral small vessel disease (CSVD) is a disorder of the brain's small perforating arterioles, capillaries, and probably venules that causes various lesions seen on pathological examination or on brain imaging with magnetic resonance imaging (MRI) or computed tomography (CT). Typical CSVD includes white matter hyperintensities (WMH), enlarged perivascular spaces (EPVS), cerebral microbleeds (CMBs), and so on (4-6). Brain atrophy occurs with the usual aging process, with heterogeneous pathological changes. Brain atrophy was thought to have a close relationship with neurodegenerative diseases, but many studies have reported an association between atrophy and CSVD (4). Neurodegenerative diseases such as AD commonly coexist with cerebrovascular disease in older people, especially CSVD. CSVD can cause cognitive impairment and is a common cause of dementia, but the relationship between CSVD and the various MCI subtypes has been questioned.
The purpose of the present study was to investigate the main imaging features of CSVD on different MCI subtypes.
Subjects
Three hundred twenty-one subjects were recruited from the memory clinics between 2015 and 2019. All participants underwent routine assessments, including standardized history taking, physical and neurological examinations, necessary laboratory tests, and an MRI scan. Of these participants, 236 were diagnosed with MCI and fulfilled the following criteria: (1) cognitive complaint, preferably corroborated by an informant; (2) objective cognitive impairment, quantified as a performance score more than 1.5 standard deviations (SD) below the appropriate mean on one cognitive test of any domain; (3) essentially intact activities of daily living (ADL); and (4) no dementia (2,3). Subjects with a score more than 1.5 SD below the mean on a learning measure, immediate or delayed recall, or recognition on the Rey Auditory Verbal Learning Test (RAVLT) (7), or on immediate or delayed recall of the Rey-Osterrieth complex figure (ROCF) (7), were classified as having aMCI. Subjects with a score more than 1.5 SD below the mean on at least one test of attention, executive function, language facilities, or visuospatial capacity, but no memory impairment, were classified as having naMCI. Subjects with aMCI were subclassified as asMCI if they had impairment only in memory and as amMCI if they also had impairments in non-memory domains. Subjects with naMCI were subclassified as nasMCI if they had impairment in only one non-memory domain and as namMCI if they had impairments in two or more non-memory domains. A total of 109 amMCI, 38 asMCI, 36 namMCI, and 53 nasMCI patients were included in the study.
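The subtype assignment just described is a simple decision rule. The following Python sketch makes it explicit; the function and domain names are illustrative, and "impaired" means scoring more than 1.5 SD below the normative mean, as defined above.

```python
# Illustrative decision rule for the four MCI subtypes described above.
def classify_mci(impaired_domains: set) -> str:
    """impaired_domains: cognitive domains scoring >1.5 SD below the mean,
    e.g. {"memory", "executive"}. Assumes MCI is already diagnosed."""
    memory = "memory" in impaired_domains
    n_other = len(impaired_domains - {"memory"})
    if memory:
        return "amMCI" if n_other >= 1 else "asMCI"   # amnestic multi/single
    return "namMCI" if n_other >= 2 else "nasMCI"      # non-amnestic multi/single

print(classify_mci({"memory"}))                  # asMCI
print(classify_mci({"memory", "attention"}))     # amMCI
print(classify_mci({"executive"}))               # nasMCI
print(classify_mci({"executive", "language"}))   # namMCI
```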
Eighty-five subjects were considered as healthy controls. The inclusion criteria for the controls were as follows: (1) almost normal cognitive functions verified by informants; (2) the Mini Mental State Examination (MMSE) scores equal to or above 26 (8); (3) intact ADL; and (4) a Clinical Dementia Rating Scale (CDR) score of 0 (9). The exclusion criteria for the patients and controls were severe medical illness, neurological disorder, psychiatric disease, hearing or eyesight loss, and obvious abnormalities visible by cranial MRI. Participants who had been prescribed psychiatric drugs were also excluded.
The objectives of the research were explained to the participants and their families, and written informed consent was obtained for each participant. The research was approved by the Ethics Committee of the Beijing Tiantan Hospital.
Cognitive Assessments
The cognitive assessments were administered by technicians according to a standard procedure and scored by a neuropsychologist. The time required for test administration was ∼90 min.
The test battery included global cognitive screening, attention/processing speed, executive function, memory aptitude, language facilities, and visuospatial abilities. The MMSE and the Montreal Cognitive Assessment (MoCA) (Beijing version) (10) were used for global cognitive screening. The Digit Span Forward subset of the Wechsler Adult Intelligence Test-Revised Chinese version (WAIS-RC) (11), the Trail Making Test A (TMT-A) (7), the Stroop Color-Word Test (modified version) (SCWT) Part D (7), and the Digit Symbol subtest of the WAIS-RC (11) were used to assess processing speed/attention. Executive function was assessed using the Digit Span Backward subset of the WAIS-RC, the Chinese Version of Trail Making Test B (TMT-B) (12), and the SCWT Part C (7). The RAVLT including learning measure (the sum of trials 1-5), immediate and delayed recall, and recognition, and the ROCF including immediate and delayed recall were used to detect memory (7).
The Semantic Category Verbal Fluency Test (animal) (7) and the Boston Naming Test (BNT) as modified by Cheung et al. (13) were employed to assess language ability. Visuospatial skills were verified by the copy part of the ROCF (7), the Block Design of the WAIS-RC (11), and the Clock Drawing Test (CDT) (14) scored by the Rouleau system. The raw scores were documented in all cognitive tests.
MRI
All subjects were scanned with a standardized scan protocol on 1.5-or 3.0-Tesla MRI scanners, including T1-weighted imaging, T2-weighted imaging, diffusion weighted imaging, fluid-attenuated inversion recovery sequence (FLAIR), and susceptibility-weighted imaging (SWI). All scans were visually rated by two trained neurologists who were blinded for the diagnosis.
EPVS were defined as fluid-filled spaces with a signal intensity similar to CSF on all sequences, which followed the course of penetrating vessels. They appeared linear, round, or ovoid, with a diameter generally smaller than 3 mm. EPVS were rated using a four-point visual rating scale on axial T2-weighted images (1: ≤10 EPVS; 2: 11-20 EPVS; 3: 21-40 EPVS; 4: >40 EPVS) for the basal ganglia (BG) and centrum semiovale (CSO) (15). The numbers of EPVS were counted in the slice with the highest number. The numbers refer to EPVS on one side of the brain, and the higher score was used if there was asymmetry between the hemispheres. The interrater reliability for the whole group was 0.753 and 0.807 for the scores of BG and CSO EPVS, respectively (p < 0.001).
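The count-to-grade binning of this scale is mechanical and can be written down directly. The sketch below is illustrative, with the hemisphere rule from the text included and the grade boundaries as read from the scale above:

```python
# Four-point EPVS rating from a raw count, per the scale described above.
def epvs_grade(count: int) -> int:
    if count < 0:
        raise ValueError("EPVS count cannot be negative")
    if count <= 10:
        return 1
    if count <= 20:
        return 2
    if count <= 40:
        return 3
    return 4

# The rating uses the hemisphere with more EPVS in the slice with the
# highest count:
def rated_grade(left_count: int, right_count: int) -> int:
    return epvs_grade(max(left_count, right_count))
```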
WMH were hyperintense on T2-weighted sequences or FLAIR images. The degree of the severity was rated on FLAIR images according to the Fazekas score (range 0-3) (16). Periventricular (PV) and deep white matter hyperintensities (DWMH) were scored separately. The PV WMH score was 0 (absence), 1 (caps or pencil-thin lining), 2 (smooth halo), or 3 (irregular PV lesions extending into the deep white matter). The DWMH score was 0 (absence), 1 (punctate foci), 2 (beginning confluence of foci), or 3 (large confluent areas). The interrater reliability for the whole group was 0.771 and 0.816 for the scores of PV WMH and DWMH, respectively (p < 0.001).
CMBs were defined as small (generally 2-5 mm in diameter, but up to 10 mm) areas of signal void with associated blooming seen on SWI and were generally not seen on FLAIR, T1-weighted, or T2-weighted sequences. CMBs were classified manually as lobar CMBs (suggestive of cerebral amyloid angiopathy) and deep or infratentorial CMBs (suggestive of hypertensive arteriopathy) (17). The former included different cortical regions, and the latter included the basal ganglia, thalamus, internal capsule, external capsule, corpus callosum, deep and periventricular white matter, brainstem, and cerebellum. The interrater reliability for the whole group for the presence of CMBs was 0.882 (p < 0.001).
T1-weighted or FLAIR images were used to investigate regional brain atrophy with three visual rating scales: the medial temporal lobe atrophy (MTA) scale (18), the posterior atrophy (PA) scale (19), and the global cortical atrophy scale-frontal (GCA-F) subscale (20). The MTA was rated on a five-point scale for the left and right sides separately (0 = normal; 1 = widened choroid fissure; 2 = further widening of the fissure, widening of the temporal horn, and opening of other sulci; 3 = pronounced volume loss of the hippocampus; 4 = end-stage atrophy). The PA was rated using the posterior cortical atrophy scale (range 0-3), with the average score of the left and right sides (0 = no atrophy; 1 = mild atrophy, opening of sulci; 2 = moderate atrophy, volume loss of gyri; 3 = severe atrophy, knife-blade appearance). The GCA-F was assessed visually on a 0-3 scale with the same anchors (0 = no atrophy; 1 = mild atrophy, opening of sulci; 2 = moderate atrophy, volume loss of gyri; 3 = severe atrophy, knife-blade appearance). The interrater reliability for the whole group was 0.885, 0.813, 0.900, and 0.846 for the scores of GCA-F, MTA of the left and right sides, and PA, respectively (p < 0.001).
Statistical Analysis
Statistical analyses were performed using SPSS, version 17.0 (SPSS Inc., USA). Data are expressed as the mean ± SD unless otherwise specified. One-way analysis of variance (ANOVA) was applied for quantitative demographic variables among the control, amMCI, asMCI, nasMCI, and namMCI groups. We compared the results of the cognitive tests using analysis of covariance (ANCOVA) adjusted for age and education. Because multiple cognitive tests were administered, Bonferroni correction for multiple tests was also applied (p < 0.0025). The scores of brain atrophy were also assessed among all groups using ANOVA including the MTA, PA, and GCA-F.
The chi-square test was used to compare differences in qualitative variables, such as the sex ratio, among the five groups. The tests were also used to assess frequency distributions of the neuroimaging variables (WMH, CMBs, and EPVS) in the different MCI subgroups and healthy controls. All groups were dichotomized according to the score of PV WMH or DWMH (≥2 or <2), lobar or deep or infratentorial CMBs (absent or present), and the EPVS score for BG or CSO (≥2 or <2).
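As an illustration of these dichotomized comparisons, the following SciPy sketch runs a chi-square test on a 2 × 2 table; the counts are invented placeholders, not the study's data.

```python
# Chi-square test on a dichotomized imaging variable (e.g. BG-EPVS >= 2)
# across two groups; the counts below are placeholders, not study data.
import numpy as np
from scipy.stats import chi2_contingency

#                 high-grade  low-grade
table = np.array([[60,        49],       # e.g. amMCI (n = 109)
                  [20,        65]])      # e.g. controls (n = 85)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```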
The Spearman correlation coefficient (r) was used to evaluate the correlations between imaging variables, including WMH, EPVS, MTA, PA, and GCA-F, and age, education, and scores of cognitive tests in all MCI groups. Binary logistic regression analysis was also used to assess the relationships between CMBs and age, education, scores of cognitive tests, and the other imaging variables.
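A minimal sketch of both analyses is shown below, using SciPy for the Spearman correlation and statsmodels for the binary logistic regression; the synthetic arrays stand in for per-patient study variables and are not real data.

```python
# Sketch of the correlation and regression analyses described above.
import numpy as np
from scipy.stats import spearmanr
import statsmodels.api as sm

rng = np.random.default_rng(0)
age = rng.normal(70, 8, 100)
epvs_score = rng.integers(1, 5, 100).astype(float)
cmb_present = rng.integers(0, 2, 100)   # 0 = absent, 1 = present

# Spearman rank correlation between an imaging score and age:
rho, p = spearmanr(epvs_score, age)
print(f"Spearman r = {rho:.2f}, p = {p:.3f}")

# Binary logistic regression: CMB presence vs. age (with intercept):
X = sm.add_constant(age)
fit = sm.Logit(cmb_present, X).fit(disp=0)
print(fit.params)   # intercept and age coefficient (log-odds scale)
```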
All statistical tests were two-tailed, and p < 0.05 was considered to indicate statistical significance.
Comparison of Demographic and Cognitive Data Among Groups
The demographic and clinical data of the patients with amMCI, asMCI, namMCI, nasMCI, and the healthy controls are summarized in Table 1.
The healthy controls were younger than patients with amMCI, asMCI, and namMCI, whereas amMCI patients were slightly older than nasMCI patients. There were no differences in age among other groups. Patients with amMCI had lower education levels than patients with asMCI and healthy controls, but there were no differences in education among other groups. There were no sex differences among different groups.
After adjusting age and education, patients with amMCI performed worse than patients with nasMCI and healthy controls on the MMSE (Bonferroni correction was applied and level of significance set at 0.0025). The scores of the MoCA in patients with amMCI and namMCI were lower than those of healthy controls, whereas patients with amMCI also had worse scores than patients with asMCI and nasMCI.
On the attention/processing speed domain, the Digit Span Forward scores in patients with namMCI and nasMCI were lower than those of healthy controls, while patients with namMCI also performed worse than patients with asMCI. Patients with amMCI and namMCI performed worse on the TMT-A and the SCWT Part D than healthy controls, whereas patients with amMCI spent a longer time on the TMT-A compared with patients with asMCI. Patients with amMCI, namMCI, and nasMCI had lower scores of the Digit Symbol than healthy controls, while patients with amMCI and namMCI did worse than patients with asMCI.
On the domain of executive function, the scores of patients with amMCI and namMCI were lower on the Digit Span Backward than those of healthy controls. Patients with amMCI and namMCI performed worse on the TMT-B than healthy controls and patients with asMCI, while the amMCI group also performed worse than the nasMCI group. As to SCWT Part C, healthy controls and patients with asMCI spent a shorter time than did patients with amMCI.
On the domain of memory, patients with amMCI and asMCI had lower scores than patients with namMCI and nasMCI and healthy controls on the learning measure, immediate and delayed recall, and recognition of the RAVLT. The scores of patients with amMCI and asMCI were also lower than those of patients with namMCI and nasMCI and healthy controls on immediate and delayed recall of the ROCF.
On the domain of language facilities, patients with amMCI and namMCI had lower scores than patients with asMCI and nasMCI and healthy controls on the BNT. Patients with amMCI and namMCI performed worse than healthy controls on the verbal fluency test, while the scores of patients with amMCI were also lower than those of patients with asMCI and nasMCI.
On the domain of visuospatial abilities, patients with amMCI and namMCI had lower scores on the Block Design than healthy controls. Patients with amMCI and namMCI performed worse than patients with asMCI and healthy controls on the CDT. As to the copy part of the ROCF, the scores of patients with amMCI were lower than those of healthy controls.
Comparison of Imaging Data of CSVD Among Groups
We compared the percentages of participants with higher scores of PV WMH or DWMH, lobar or deep or infratentorial CMBs, higher scores of EPVS for BG or CSO, and the degree of brain atrophy represented as scores of MTA, PA, and GCA-F among groups.
The patients with amMCI, namMCI, and nasMCI had more high-grade BG EPVS compared with healthy controls, while the percentages of high-grade BG EPVS in the patients with amMCI were also higher than those in patients with asMCI, namMCI, and nasMCI. There were more high-grade CSO EPVS in patients with amMCI in comparison with all other groups (Table 2, Figure 1).
The patients with amMCI and namMCI had more severe DWMH compared with healthy controls, while patients with amMCI also had higher percentages of severe DWMH than patients with asMCI. There were marginal differences between amMCI and nasMCI (p = 0.061) and between asMCI and namMCI (p = 0.093). Patients with amMCI and namMCI had more severe PV WMH than healthy controls, whereas the namMCI group also had higher percentages of severe PV WMH compared with the asMCI and nasMCI groups, although the differences were marginal (p = 0.065, 0.051) (Table 2, Figure 1).
Patients with MCI had deep or infratentorial CMBs in 19.1% (45/236) of cases and lobar CMBs in 13.1% (31/236). Patients with amMCI and namMCI had more deep or infratentorial CMBs than healthy controls, and there were also more deep or infratentorial CMBs in patients with amMCI compared with patients with asMCI and nasMCI. The differences between asMCI and nasMCI patients and healthy controls were marginal (p = 0.052, 0.065). As to lobar CMBs, there were no differences among groups (Table 2, Figure 1).
All MCI groups had higher total MTA scores than healthy controls, whereas the scores of the amMCI group were also higher than those of the namMCI and nasMCI groups. Patients with amMCI, asMCI, and nasMCI had higher left and right MTA scores than healthy controls, and right MTA scores were higher in patients with amMCI than in patients with namMCI. Patients with amMCI had higher GCA-F scores than healthy controls. PA scores were higher in patients with amMCI, asMCI, and namMCI compared with healthy controls (Table 2, Figure 2).
Correlations Between Variables of CSVD and Cognitive Tests
In MCI groups, BG EPVS had positive correlations with white matter hyperintensities and frontal and posterior atrophy in addition to CSO EPVS, while there were correlations between BG EPVS and the MoCA, and tests of cognitive domains including attention/processing speed, executive function, verbal memory, language facilities, and visuospatial abilities. CSO EPVS had positive correlations with white matter hyperintensities, while there were only correlations between CSO EPVS and learning measures of RAVLT, the verbal fluency, and the SCWT Part D.
PVWMH had positive correlations with medial temporal lobe atrophy in addition to DWMH, whereas DWMH had positive correlations with medial temporal lobe and posterior atrophy. There were correlations between WMH and age and tests involving attention/processing speed and executive function.
Frontal atrophy had positive correlations with medial temporal lobe and posterior atrophy, whereas there was a positive relationship between medial temporal lobe atrophy and posterior atrophy. Frontal atrophy had correlations with age, education, the MMSE, and tests including attention/processing speed, executive function, and verbal and visuospatial memory. Medial temporal lobe atrophy also had correlations with age, education, the MMSE, and tests including attention/processing speed, executive function, and verbal and visuospatial memory. Posterior atrophy only had correlations with age, education, and tests including attention/processing speed, executive function, and learning measures of the RAVLT (Table 3).
Binary logistic regression analysis showed that only age contributed to lobar CMBs (OR = 1.160, 95% CI 1.035-1.301, p = 0.011). There was no relationship between deep or infratentorial CMBs and any other imaging or cognitive indicators.
DISCUSSION
The present study investigated cognitive features and imaging features of CSVD in MCI. There were obvious cognitive impairments in the different MCI groups, especially amMCI. Different imaging changes of CSVD were also shown in the various MCI subtypes. Patients with amMCI had more high-grade BG EPVS, followed by patients with namMCI and nasMCI, compared with healthy controls, while there were more high-grade CSO EPVS in patients with amMCI than in all other groups. EPVS had obvious correlations with PV WMH and DWMH, whereas BG EPVS also had relationships with frontal and parietal atrophy. There was a correlation between BG EPVS and every domain of the cognitive tests, but CSO EPVS was related only to attention, language, and the learning part of the verbal memory test.
Perivascular spaces are extensions of the extracerebral fluid space around arteries, arterioles, veins, and venules, which are the main drainage conduits and form part of the glymphatic system (4). EPVS could involve impairment of cerebrovascular reactivity, blood-brain barrier dysfunction, perivascular inflammation, and abnormal clearance of waste proteins from the interstitial fluid space, ultimately leading to accumulation of toxins, hypoxia, and tissue damage (21). All of these pathophysiological processes could increase WMH, affect the clearance of β-amyloid, and cause its accumulation in the brain, which increases the risk of cognitive decline and dementia and may play a key role in the pathogenesis of AD. Previous studies found that EPVS was associated with WMH but not with atrophy (4,22). We confirmed the association with WMH in patients with MCI, but we also observed a relationship between BG EPVS and frontal and parietal atrophy. We found that EPVS in different regions had different effects on cognition and MCI: patients with amMCI had more high-grade BG and CSO EPVS, whereas only high-grade BG EPVS was more common in naMCI patients. A Spanish study reported that BG EPVS, but not CSO EPVS, could predict MCI in hypertensive individuals, although the effect was influenced by other markers of CSVD (23). The scores of BG EPVS were higher in vascular dementia than in AD, with no differences in CSO EPVS scores (24). Another study showed that a high degree of white matter EPVS is associated with the number of lobar MBs, while a high degree of BG EPVS is associated with the presence of hypertension; this suggested that white matter EPVS is related to cerebral amyloid angiopathy (CAA) (25). CSO EPVS might indicate the presence of CAA or a mixed hypertensive/CAA arteriopathy. However, we found no correlation between EPVS and CMBs, which may be attributable to the method of assessing CMBs. Participants with severe EPVS in both regions, or in the CSO alone, had greater decline in global cognition, and the presence of severe EPVS in both regions was an independent predictor of dementia; that study did not find an association between specific cognitive domains and EPVS (26). Our study found that BG EPVS was related to a more extensive range of cognitive domains than CSO EPVS.
The patients with amMCI and namMCI had more percentages of severe DWMH and PVWMH compared with healthy controls. There were more severe DWMH in patients with amMCI compared with asMCI. There were marginal differences between amMCI and nasMCI and namMCI and asMCI in DWMH, and namMCI and asMCI or nasMCI in PVWMH. WMH had a correlation with attention, executive function, and medial temporal lobe atrophy.
The pathogenesis of WMH is not well understood and could be multifactorial, although it is strongly associated with cerebrovascular disease and vascular risk factors (4). In baseline cognitively normal individuals, greater WMH were associated with accelerated multiple-domain cognitive, neuropsychiatric, and functional decline, independent of traditional risk factors; WMH also contributed to the development of MCI (27,28). There was no statistical difference among the MCI subtypes regarding DWMH, while PV WMH scores were significantly higher in patients with naMCI than in patients with aMCI (29). The aMCI group showed elevated temporal and occipital WMH volume relative to the control group, whereas the naMCI group showed elevated WMH volume across frontal, parietal, temporal, and occipital regions, suggesting more widespread WMH accumulation; in addition, the naMCI participants showed greater occipital WMH relative to the aMCI group (30). Our study demonstrated that the percentages of high-grade PV WMH and DWMH were higher in multiple-domain MCI than in single-domain MCI or healthy controls. A meta-analysis showed that the association between WMH and overall cognition was significantly stronger for MCI than for AD (31). For both groups, the largest effect sizes were found in attention, executive functions, and processing speed. Interestingly, there was also a significant association with the memory domain, which is more closely related to AD. The study also suggested that PV WMH were more strongly related to cognition than DWMH. DWMH related to ischemic risk factors may predominantly disrupt the short association fibers that link adjacent gyri, whereas PV WMH linked to atrophic processes are likely to affect the long association fibers that connect more distant cortical areas. A study of executive functions revealed associations between PV WMH and working memory, between DWMH and inhibition performance, and between MTA and flexibility performance (32). The relationship between WMH and attention and executive function was also demonstrated in our study.
The MRI study showed that hippocampal subfield atrophy worsened with increasing CSVD severity, mainly WMH. Greater atrophy was seen with moderate to severe CSVD compared to mild CSVD in the subfields including the subiculum, CA1, CA4, molecular layer, and dentate gyrus. Atrophy in the subfields was significantly associated with poor episodic memory and frontal executive function (33). The present study also certified that WMH had an obvious correlation with MTA.
Patients with amMCI and namMCI had more deep or infratentorial CMBs than healthy controls, whereas there were also more deep or infratentorial CMBs in patients with amMCI compared with patients with asMCI and nasMCI. The differences between asMCI and nasMCI patients and healthy controls were marginal. As to lobar CMBs, there were no differences among groups.
CMBs may represent hemosiderin-laden macrophages in perivascular tissue, secondary to vascular leakage of blood cells. Deep or infratentorial CMBs are hypothesized to be associated with hypertensive microangiopathy, while lobar CMBs may be due to CAA (4). The mechanisms underlying the association between CMBs and cognitive dysfunction are not well understood.
Patients with AD and with the progressive subtype of MCI had significantly more new CMBs during follow-up than controls and patients with the stable subtype of MCI (34). The total number of CMBs, as well as the numbers in deep and lobar regions, were associated with the attention/executive function and fluency domains (35). The presence of any CMBs, including lobar and deep or infratentorial CMBs, was related to MCI after adjusting for confounders. Furthermore, the presence of multiple microbleeds is associated with lower MoCA total scores and with worse performance on specific cognitive domains, such as global cognitive function, information processing speed, and motor speed (36). Among our patients with MCI, 19.1% had deep or infratentorial and 13.1% had lobar CMBs, similar to a previous study (17). Deep or infratentorial CMBs were more frequent across the MCI groups, and the burden in the amMCI group was also greater than in the single-domain MCI groups. There was no difference in lobar CMBs, which may reflect the high percentage of lobar CMBs among healthy controls recruited from memory clinics. We found no association between CMBs and cognition, which could be because CMBs were rated only as absent or present.
Although brain atrophy is usually attributed to neurodegenerative diseases, many imaging studies have reported an association between the presence and severity of SVD and brain atrophy, including hippocampal atrophy. Moreover, atrophy is an important measure in imaging studies assessing the burden of vascular damage in the brain and its effect on cognition (4). Brain atrophy was present in all MCI types and was greater in multiple-domain types, particularly in naMCI (29,37). The present study found that all MCI groups had obvious medial temporal lobe atrophy compared with healthy controls, and atrophy was also more pronounced in the amMCI group than in the naMCI group. Patients with amMCI had obvious frontal atrophy, while parietal atrophy was prominent in patients with aMCI and namMCI in comparison with healthy controls. Episodic memory impairment is often considered a marker of early-stage AD. Scores on episodic memory tests have been reported to correlate with hippocampal volumes in MCI, which were also associated with executive function (33,38). Our study showed that frontal and medial temporal lobe atrophy clearly correlated with attention, executive function, and learning and memory, whereas parietal atrophy was related to attention, executive function, and the learning component of verbal memory.
The present study has some limitations. First, the participants were selected from our memory clinics rather than from the community, which may introduce bias. Second, more patients with amMCI were enrolled than with asMCI and namMCI. Third, the healthy controls were younger, and patients with amMCI had lower education levels; however, we analyzed the results after adjusting for age and education.
In conclusion, patients with different MCI types had obvious cognitive impairments, especially those with amMCI. Imaging markers of CSVD, including EPVS, WMH, CMBs, and brain atrophy, were found at different frequencies across the MCI groups, and the various neuroimaging features of CSVD were related to cognitive impairment. Together, these findings indicate that vascular mechanisms contribute to the prodromal stage of dementia.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Ethics Committee of the Beijing Tiantan Hospital. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
XL, XT, and JJ contributed to the conception and design of the study. XL analyzed and interpreted the data. XL and XT revised the manuscript. MS, YJ, SJ, ZZ, ZH, and XZ contributed to the participants' enrollment and the clinical assessments. MS wrote the first draft of the manuscript. All the authors contributed to the article and approved the submitted version.
All five-loop planar four-point functions of half-BPS operators in N = 4 SYM
We obtain all planar four-point correlators of half-BPS operators in N = 4 SYM up to five loops. The ansatz for the integrand is fixed partially by imposing lightcone OPE relations between different correlators. We then fix the integrated correlators by comparing their asymptotic expansions with simple data obtained from integrability. We extract OPE coefficients and find a prediction for the triple wrapping correction of the hexagon form factors, which contributes already at the five-loop order.
Introduction
Correlation functions of local operators are among the most interesting observables to be studied in a CFT. They encode nontrivial physics of the theory that can be accessed through different limits of the correlation functions (the large-spin, bulk-point, or Regge limits [1][2][3]). Of all known CFTs, N = 4 SYM stands at a special point where the symmetries of the theory might allow it to be solved completely. It is then possible to study the effects of finite coupling in a four-dimensional gauge theory, which might lead to better strategies for the study of other quantum field theories.
The most powerful method in N = 4 SYM that exploits these symmetries is integrability, which started with the understanding of two-point functions of single-trace operators in the planar limit [4][5][6]. More recently it was understood how to use integrability to compute higher-point correlators of local operators [7][8][9][10] and even to obtain non-planar quantities [11,12]. This proposal, known as the hexagon approach, has now passed many non-trivial checks both at weak and strong coupling [13][14][15][16][17][18][19]. However, despite being a finite-coupling proposal this program is taking its first steps and there are still aspects that need to be better understood, so it is essential to obtain field-theoretic results which provide further checks and clarify subtleties within the integrability framework.
Correlators of half-BPS scalar operators are probably the simplest objects in N = 4 SYM, and the fact that they are finite and do not need infinite renormalization makes them ideal objects to study. While two- and three-point functions are protected, higher-point functions have an explicit coupling dependence, which motivated their study in the early days of the AdS/CFT correspondence, both at weak and strong coupling [20][21][22][23]. More recently, the discovery of a symmetry enhancement [24] has been combined with a lightcone OPE analysis, which allowed the correlator of four O20 operators to be fixed to very high loop order [25]. This OPE constraint is very powerful, as it implies exponentiation of the correlator in the light-cone limit, therefore providing recursive relations between different orders in the perturbative expansion of the four-point function. Let us remark that some correlators have also been obtained using bootstrap methods [26][27][28][29][30][31][32].
The goal of this paper is to compute the four-point correlation functions of half-BPS operators with higher R-charge weights, up to five loops. In these generic configurations the symmetry mentioned above is not as strong and the light-cone OPE not as constraining, which means that the integrand cannot be completely determined with these methods. In this work we combine the light-cone OPE analysis with OPE data extracted from integrability, and successfully fix all four-point functions at four and five loops. We want to emphasize that we only needed OPE coefficients that are quite easy to obtain from the integrability point of view, while the data extracted from the four-point functions allows us to make highly non-trivial predictions for finite-size corrections of hexagon form factors. The most important result is the leading five-loop order of the triple wrapping correction, which was originally expected to contribute only from six loops.
In section 2 we describe the symmetries of the correlator's integrand, which allow us to construct an ansatz in terms of conformal integrals. In section 3 we show how to fix most coefficients in the ansatz by relating the light-cone OPE limits of correlators with different weights. In section 4 we explain how input from integrability can be used to fix the remaining coefficients. We then present our results for the correlators at four and five loops in section 5, where we also elaborate on the predictions for finite-size corrections of hexagon form factors that we can extract from the euclidean OPE limit of the four-point functions. We end in section 6 with our conclusions and future research directions. Finally, appendix A contains a short review of asymptotic expansions of conformal integrals. We also provide an auxiliary file with all four- and five-loop four-point functions, as well as the leading asymptotic expansions of all relevant integrals at those loop orders.
Four-point correlation functions and integrands
We consider gauge-invariant operators at the bottom of half-BPS supermultiplets of N = 4 SYM theory. The operator of weight L is realized as a single trace of the product of L ≥ 2 fundamental scalars Φ^I(x), I = 1, . . . , 6,

O_L(x, y) = y_{I_1} · · · y_{I_L} Tr[ Φ^{I_1} · · · Φ^{I_L} ](x) .   (2.1)

The traceless symmetrization over R-symmetry indices is provided by the auxiliary so(6) harmonic variables y^I with y · y = 0. Half-BPS operators are protected: they do not undergo infinite renormalization, so their conformal dimension exactly equals L, and the correlation functions of these operators are finite quantities in D = 4. The classical (super)conformal symmetry of the N = 4 SYM Lagrangian is also inherited by these dynamical quantities. The two- and three-point correlation functions are completely fixed by the conformal symmetry, and their tree-level approximation is exact. For more points the correlators receive quantum corrections. We study the four-point correlators

G_{L_1 L_2 L_3 L_4} = ⟨ O_{L_1}(x_1, y_1) O_{L_2}(x_2, y_2) O_{L_3}(x_3, y_3) O_{L_4}(x_4, y_4) ⟩ .   (2.2)

They are highly nontrivial functions containing useful information about the dynamics of the theory. At the same time, symmetry constraints considerably simplify their form, which makes them more manageable than higher-point correlators.
In the tree approximation the correlators are given by sums of products of free propagators stretched between the scalar fields Φ. Here y_{ij}^2 ≡ y_i · y_j and x_{ij}^2 ≡ (x_i − x_j)^2. The perturbative expansion of the correlators in the 't Hooft coupling λ = g^2 N_c/(4π^2) contains a huge number of Feynman diagrams which have to be added together to obtain a gauge-invariant quantity. Thus, prior to any loop integrations, just finding the gauge-invariant integrand of correlator (2.2) constitutes a nontrivial problem. In this paper we solve this problem up to the five-loop order for arbitrary BPS weights using integrability methods.
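For orientation, the elementary building block implicit in this discussion is the free propagator factor; in a convention common to this literature (the overall normalization varies between papers and is an assumption here), it reads

\[
d_{ij} \;=\; \frac{y_{ij}^2}{x_{ij}^2}\,,
\qquad
y_{ij}^2 \equiv y_i \cdot y_j\,,
\quad
x_{ij}^2 \equiv (x_i - x_j)^2\,,
\]

so that a tree-level correlator is a sum of products of such factors, with the powers of d_{ij} distributing the weights L_i among the four points.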
The Lagrangian insertion formula [20] provides a neat expression for the integrand of (2.2) as the correlation function of 4 + ℓ operators (the four operators O_{L_i} and ℓ chiral Lagrangian densities L) calculated in the Born approximation, which is the lowest nontrivial perturbative approximation. Let us stress that the Born-level (4 + ℓ)-point correlator is of order λ^ℓ, and the familiar Feynman diagrams representing this correlator involve the interaction vertices. Nevertheless, G is a rational function of the 4 + ℓ space-time coordinates x and it is polynomial in the harmonic variables y. G carries conformal weight L_i and harmonic weight L_i at the external points E = {1, 2, 3, 4}, and zero harmonic weight and conformal weight (+4) at the internal points I = {5, . . . , 4 + ℓ}. G is a particular component of the supercorrelator of 4 + ℓ half-BPS multiplets. The superconformal symmetry of the latter implies [24,[33][34][35]] that G is proportional to the rational factor R(1, 2, 3, 4), defined in (2.5).
This factor absorbs harmonic weight (+2) and conformal weight (+1) at the external points E. The complementary harmonic weights, i.e. L_i − 2 at point i ∈ E, can be absorbed by propagator factors, which leads to the generic form of the Born-level correlator given in eq. (2.6). The summation in eq. (2.6) is over tuples {b_ij}, i < j, i, j ∈ E, satisfying the constraints Σ_{j≠i} b_ij = L_i − 2 for each i ∈ E. The tuples represent different ways to distribute harmonic weights, and conformal weight counting then constrains the polynomials P^(ℓ)_{b_ij}. The numerical normalization factor C in (2.6), eq. (2.7), is chosen for the sake of convenience. A simple short-distance OPE analysis reveals that G ∼ 1/x_pq^2 + O(1) at x_p → x_q if p ∈ E and q ∈ I, or p, q ∈ I. This implies that P^(ℓ)_{b_ij} has certain discrete symmetries. For example, the integrand of the four-point function of O20 operators (L_1 = . . . = L_4 = 2) is specified by one conformal polynomial with {b_ij} = {0, 0, 0, 0, 0, 0}, which is invariant under all permutations S_{4+ℓ} of the (4 + ℓ) space-time points [24]. In the case of generic half-BPS weights the conformal polynomial P^(ℓ)_{b_ij} has a reduced discrete symmetry: it is invariant with respect to the same subgroup G_ℓ ⊂ S_{4+ℓ}, acting on the points E ∪ I, as the accompanying factor in eq. (2.8). Obviously G_ℓ contains S_ℓ as a subgroup, S_ℓ ⊂ G_ℓ, which acts on the Lagrangian points. Thus the construction of the correlator integrand boils down to fixing a number of conformal polynomials P^(ℓ)_{b_ij} with given discrete symmetries. There is a finite number of them at each loop order and they can be enumerated, so the remaining freedom reduces to a number of numerical constants.
Integrating out the internal points I according to (2.3), we rewrite the contribution of each SU(4) harmonic structure in (2.6) as a linear combination of ℓ-loop four-point conformally covariant integrals I^(ℓ)(1, 2, 3, 4) with rational numerical coefficients. Conformal covariance fixes the weights of each integral at the external points, so it can be represented as a covariant prefactor times a conformally invariant function I(u, v), which consequently depends only on the conformal cross-ratios u and v.
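The text refers to the conformal cross-ratios without displaying them; the definition standard in this literature (and presumably the one intended here, up to a possible relabeling of points) is

\[
u \;=\; \frac{x_{12}^2\, x_{34}^2}{x_{13}^2\, x_{24}^2}\,,
\qquad
v \;=\; \frac{x_{14}^2\, x_{23}^2}{x_{13}^2\, x_{24}^2}\,.
\]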
Several examples of five-loop conformally covariant integrals are given in eq. (5.2). The number of linearly independent conformal integrals is smaller than one could naively expect on the basis of the discrete symmetries of their integrands: the conformal symmetry implies nontrivial relations among them, one of which immediately follows from (2.10). The latter relation reduces the number of independent orientations of a given integral. Applying (2.12) to the conformal ℓ′-loop subintegrals (ℓ′ < ℓ) of the ℓ-loop integrals, we generate 'magic' identities [36] among ℓ-loop integrals of different topology. Also, some of the ℓ-loop integrals trivially factorize into products of several lower-loop conformal integrals, and some of the integrals differ only by a rational factor in the cross-ratios u, v. These observations enable us to reduce the number of conformal integrals we have to deal with. The number of non-trivially distinct ℓ-loop integrals is given in table 1. The asymptotic expansion of the integrals at u → 0, v → 1 is discussed in appendix A, and the results are collected in an ancillary file. In the following we denote the integrated contribution of the {b_ij} harmonic structure to the r.h.s. of eq. (2.6), defined in (2.9), by F^(ℓ)_{b_ij}. As discussed above, it is given by a linear combination of the conformal integrals, and the full four-point correlator is assembled from these contributions as in eq. (2.14). The correlator is specified by the weights {L_i}_{i∈E} of the half-BPS operators, and correlators of different weights do not have to coincide. However, at each given loop order there is only a finite number of different correlators. This is rather obvious from the point of view of Feynman graphs: there are no more than 2ℓ interaction vertices in the corresponding Feynman graphs, so for sufficiently large weights {L_i} some propagators are spectators, stretched between pairs of operators O_{L_i} and O_{L_j} as in tree graphs.
Thus there is a finite number of functions F^(ℓ)_{b_ij} at any given loop order ℓ. More precisely, there is a saturation bound κ = κ(ℓ) such that the relations (2.15) hold, and similar relations also hold for any other index b_ij instead of b_12. We expect that the minimal value of the saturation bound is κ_min = ℓ − 1 (2.16). Previously this has been proven to be true up to the three-loop order. We argue that it should hold up to the five-loop order: choosing the saturation bound κ in (2.15) higher than κ_min and implementing the correlator bootstrap, we find that relations (2.15) hold with κ = κ_min. In table 2 we show the number of functions F^(ℓ)_{b_ij} for κ = κ_min, modding out permutations of the external points.
Correlator bootstrap with light-cone OPE
Up to now we have not used planarity restrictions. In order to make use of some dynamical constraints on the coefficients of the polynomials P^(ℓ)_{b_ij} we consider the planar approximation. In particular, we require that the graphs representing the integrand G, eq. (2.6), have planar topology. In this way we considerably reduce the number of admissible polynomials P^(ℓ)_{b_ij}. Then we can try to fix the remaining numerical coefficients by means of the OPE analysis.
We would like to impose OPE constraints directly on the integrands; it is obviously preferable to deal with rational integrands rather than with unknown multi-loop integrals. In this way we try to pin down as many coefficients in the ansatz (2.6) as possible. Then we fix the remaining coefficients by extracting more detailed dynamical information from the OPEs of the integrated quantities.
In [37] the four-point correlator of four O20 operators, with weights L_1 = L_2 = L_3 = L_4 = 2, was considered, and constraints on the asymptotic behavior of its integrand were found in the light-cone limit x_12^2, x_23^2, x_34^2, x_14^2 → 0. The correlator exponentiates in this limit, which implies relations among different orders of the perturbative expansion, so the correlator can be recursively constrained order by order. Using this approach the integrands have been fixed up to three loops at generic N_c [37] and up to ten loops in the planar limit [25,[37][38][39]].
For higher-weight correlators a similar exponentiation property does not seem to hold. Nonetheless, some useful OPE constraints for the integrands are known. In [40], by studying the light-cone OPE x_12^2 → 0 of higher-weight Born-level correlators (2.4) in the planar approximation, the relation (3.1) was obtained, where C is defined in (2.7). It compares the leading light-cone singularities of a pair of integrands with different BPS weights. Using (3.1), the correlator integrands of all weights have been fixed up to the three-loop order in the planar approximation. Let us briefly explain the origin of eq. (3.1), following [40]. We consider the contribution of a non-protected operator O_{L,S} of twist L and spin S, belonging to some representation of SU(4), in the OPE of two half-BPS operators at short separation. The tree-level structure constants in the planar approximation satisfy a simple relation; consequently, if we could use the tree-level approximation for C_{L_1,L_2,O_{L,S}}, then the OPE contribution of O_{L,S} would cancel in the difference of correlators G^(ℓ) in eq. (3.1). In particular, this is true for the operators from the sl(2) sector (see section 4.2). In order to isolate the appropriate OPE channels we take the light-cone limit in (3.1). If we could use the tree-level approximation for the structure constants of generic operators O_{L,S}, then a stronger version of (3.1) should hold, eq. (3.3), which was conjectured in [40]. At ℓ ≤ 3 loops it is equivalent to (3.1), but starting from four loops (3.3) is more restrictive. Let us remark that the strong criterion implies the saturation bound κ = κ_min (2.16) at least up to five loops. We are going to constrain all higher-weight correlators at four and five loops in the planar approximation. For the bootstrap procedure it is essential to consider correlators of all weights simultaneously rather than a subset, since the relations (3.1) are more restrictive in the former case. We use the weight-two correlator integrands G^(ℓ)_{2,2,2,2} from [37] as an input and constrain the higher-weight correlators. We also make use of additional constraints on the integrand G^(ℓ)_{3,3,2,2} following from the exponentiation property of the short-distance OPE x_1 → x_3 [37,40] for the corresponding four-point correlator. Neither the weak (3.1) nor the strong (3.3) criterion is enough to fix all coefficients starting from the four-loop order. Nevertheless, they considerably reduce the number of unknowns, see table 3. In the following we apply the weak criterion to partially fix the integrand and then use integrability of the three-point functions to pin down the remaining coefficients. The obtained results are in agreement with the strong criterion (3.3).
Constraints on integrated correlators
Using the light-cone OPE relations from the previous section we have greatly simplified the integrands of correlation functions at four and five loops. Meanwhile the integrated four-point functions are given as combinations of four-point conformal integrals. By taking into account their symmetries and relations through magic identities [36], we can see that there is a smaller number of degrees of freedom. For example, while the weak ansatz for the five-loop integrand has 1217 unknown coefficients at bound κ = 5, the five-loop correlators are labeled by 791 independent coefficients, which we now want to determine using input from integrability.
Henceforth we will be considering the euclidean OPE limit of the four-point functions, where u → 0 and v → 1. We will assume for simplicity that the lengths of the external operators are such that L_1 ≤ L_2, L_3 ≤ L_4 and L_2 − L_1 ≥ L_4 − L_3, since all other cases can be obtained easily with a transformation of the cross-ratios. The OPE decomposition of this correlator is given in eq. (4.1) [35], which involves the R-charge blocks for the SU(4) representation [n − m, L_4 − L_3 + 2m, n − m], while the conformal block takes the form (4.2) in the OPE limit [41]. The OPE limit is therefore dominated by operators of lowest twist ∆ − S, and the SU(4) numbers are restricted such that we have polynomial dependence on the R-charge cross-ratios σ, τ. Meanwhile, from the point of view of the four-point function, we have to sum over a number of R-charge structures, each accompanied by a function of the two spacetime cross-ratios, as in eq. (4.4), where we sum over all a_ij such that Σ_{i≠j} a_ij = L_j. Not surprisingly, the number of SU(4) representations in (4.3) equals the number of allowed tuples {a_ij}, and one can easily relate them. Notice that there are relations between the functions F̃^(ℓ)_{a_ij}, as the correlator must be of the form (2.14), where the non-vanishing R_{α_ij} are the components of R(1, 2, 3, 4) from (2.5). Each F^(ℓ)_{b_ij} is given by a linear combination of conformal integrals (see eq. (2.13)), which are evaluated in the OPE limit with the method of asymptotic expansions. The asymptotic three-point function should be supplemented with finite-size corrections from the three mirror edges. Following the procedure from [7], one is instructed to insert a resolution of the identity in each of the edges. The states can have any number of particles on them; however, the higher the particle number, the more suppressed the contribution is.
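For the reader's convenience, the qualitative content of the OPE limit of a conformal block used above can be summarized schematically as follows (a standard result quoted in a common normalization; the exact eq. (4.2) may differ by overall factors):

\[
g_{\Delta,S}(u,v)\;\sim\; u^{(\Delta-S)/2}\,(1-v)^{S}\,\bigl(1+\dots\bigr)
\qquad \text{as } u\to 0,\; v\to 1\,.
\]

Expanding the dimension \(\Delta = \Delta_0 + \sum_{\ell} \lambda^{\ell} \gamma^{(\ell)}\) inside \(u^{(\Delta-S)/2} = u^{(\Delta_0-S)/2}\, e^{\frac{1}{2}\gamma \log u}\) is what generates the powers of \(\log u\) in the perturbative correlator, with the highest powers controlled by the lowest-loop anomalous dimensions.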
The unknown coefficients of the integrand enter the functions F^(ℓ)_{b_ij} as in (2.13), and each conformal integral can in principle contribute to all powers log^k(u), which means that all α_k in (4.7) will in principle depend on those unknown coefficients. If we look back at the OPE limit of the conformal blocks (4.2), we see that the coefficients multiplying the higher powers of log(u) contain only lower-loop OPE data. This simple observation has non-trivial consequences, as it implies that those terms can be constrained without difficulty by computing the required lower-loop OPE data with integrability.
Constraints from integrability
In order to put constraints on the functions F^(ℓ)_{b_ij} which enter (4.4), we must understand what we can say about the equivalent picture of the conformal block decomposition. Thanks to integrability, we know a lot about the structure of the spectrum [4,5] and the structure constants that enter (4.1). For both quantities the prescriptions are especially tailored for decompactification limits. If an operator has large spin-chain length L, then its anomalous dimension is computed with the asymptotic Bethe ansatz. However, when we make L small the prescription needs to be corrected with finite-size effects, which are given by Lüscher corrections.
Meanwhile, the OPE coefficients can be computed with hexagon form factors [7]. This method follows a similar expansion, where the decompactification limit is achieved by cutting the pair of pants. This regime is controlled by three parameters, the bridge lengths l_ij, i.e. the numbers of tree-level Wick contractions between each pair of operators. The asymptotic piece is valid when all l_ij are large, but as we decrease the bridge lengths, it must be complemented with hexagon form factors dressed by n_ij virtual excitations in the bridge of length l_ij, as depicted in figure 1. For simplicity, let us consider the structure constant between the external operators of length L_1 and L_2 and an unprotected operator of length L_0 that appears in their OPE. It was shown in [14] that the contribution of n_12 virtual excitations in the bottom bridge l_12 (opposite to the unprotected operator) is suppressed by a factor of g^{2(n_12 l_12 + n_12^2)}. (4.9) This means that even if we put a single virtual excitation in a bridge of length l_12, the wrapping correction appears at the earliest at l_12 + 1 loops.
We can now use this knowledge when we evaluate the correlator in the OPE limit. If we pick the contribution of operators O_I in a given SU(4) representation, we obtain constraints of the form (4.10) and (4.11). The reason we treat the case min(n, m) = 0 separately is that it corresponds to OPE channels with extremal three-point functions, where there is mixing with double-trace operators. In that case it is not known how to evaluate the OPE coefficients using integrability methods, so we restrict the constraint to an obvious tree-level statement. There is still another set of equations we can impose on the F̃_{a_ij}, which relates to the fact that opposed wrapping corrections factorize. Apart from a normalization factor N, the computation of the structure constant requires the evaluation of hexagon form factors. For the non-extremal case, when both n and m are strictly positive, we can impose the equality for all powers of log(u). Meanwhile, for extremal configurations this equality might not be valid, so we restrict the equation to a tree-level statement.² Let us remark that even though we used knowledge from integrability to formulate equations (4.10), (4.11), (4.16) and (4.17), they require absolutely no numerical input from the integrability machinery, and yet they impose powerful constraints on the four-point functions.
OPE data in the sl(2) sector
In the previous subsection we derived constraints on the functions F^(ℓ)_{b_ij} by looking at the integrability description of three-point functions and using the knowledge of when opposed wrapping corrections first start to kick in. This nice exercise allows us to fix many of the unknown coefficients without having to do any actual computation with the integrability machinery. In this section we explain how to further constrain the integrand by computing the simplest components of three-point functions in the sl(2) sector.
By choosing specific polarization vectors y_i for the external protected operators, we can single out the OPE channel in (4.1) with the SU(4) representation [0, L_0, 0], which corresponds to spin-chain excitations in the sl(2) sector. This is an especially easy sector within the integrability framework, where we can find all solutions to the Bethe equations without difficulty. Since this is a rank-one sector, it is also a relatively easy setup for the computation of structure constants. In order to pick such an OPE channel we should analyze correlators of the form (4.19) at the leading power of u^{−l_12}. In terms of the polarization vectors this can be achieved by the choice of polarizations given in [42] and then taking derivatives of the correlator, as in (4.22). Notice that only two elements of R contribute to the right-hand side of (4.22), namely R_{1,1,0,0,1,1} and R_{1,0,1,1,0,1}. This happens because R_{2,0,0,0,0,2} is always subleading in u, while the other three terms R_{0,...,0} happen to be subleading for the specific polarizations chosen.

¹ A naive power counting would imply that A^(1,1,1) shows up at six loops, but we will prove later that the contribution must be present already at five loops. This must happen through the regularization prescription that is introduced to fix the divergences in A^(1,0,1), which could in principle invalidate the factorization property. However, at five loops this affects only operators with symmetric splitting, in which case (4.14) is trivially satisfied.
² Interestingly enough, once we fix all four-point functions we observe that both (4.11) and (4.17) would be valid if applied to the same log(u) powers of (4.10) and (4.16).
In this way we are able to extract sum rules for operators in the sl(2) sector, which we now want to match with sum rules obtained from integrability. By equating them we will be able to determine many of the unknown coefficients in the functions F^(ℓ)_{b_ij}. The required three-point functions are obtained by a finite-volume correlator of two hexagon operators. This is a hard object to obtain, and so one considers the two-point function of the hexagon operators as an expansion around the infinite-volume limit. This is particularly useful at a perturbative level, where the finite-volume effects can be tamed order by order in the coupling. Each non-protected operator is represented by its Bethe roots, which are distributed among the two hexagons. The infinite-volume expansion corresponds to inserting a resolution of the identity in each unphysical edge of the hexagon, which in practice is written as an infinite sum of virtual excitations (including the term with zero particles). A schematic representation of this proposal is portrayed in figure 2. Figure 2. As we cut the pair of pants into two hexagons, we must partition the Bethe roots u into the sets α and ᾱ, which populate the physical edge of each of the hexagon form factors. Finite-size corrections are obtained by inserting particle/anti-particle pairs in the mirror edges of the hexagons, denoted here by ψ_ij.
The creation and propagation of the virtual excitations costs energy, so their contribution appears at higher orders in perturbation theory. The explicit coupling dependence of different finite-size corrections can be found in [14].
We will consider a ratio of structure constants, where the numerator is the OPE coefficient for a non-protected operator of length L_0 in the sl(2) sector with two protected operators of lengths L_1 and L_2, while the denominator corresponds to the structure constant for three protected operators of lengths L_0, L_1 and L_2. In the resulting expression, ⟨{u}|{u}⟩ is the Gaudin norm, µ is the measure which controls the asymptotic normalization of one-particle states, S is the sl(2) S-matrix, and A is the two-point function of hexagon operators. In this work it was sufficient to consider the asymptotic hexagon form factors A^(0,0,0) and the single-particle wrapping correction in the opposed mirror channel A^(0,1,0), which we now review.
Finite-size corrections. The computation of the hexagon with a single virtual excitation in the mirror edge opposed to the unprotected operator boils down to the evaluation of an integral multiplying the asymptotic piece, A^(0,1,0) = A^(0,0,0) × (integral) [7], where l_12 is the length of the opposed bridge, T_a is the transfer matrix, h_{1a} the hexagon form factor, and µ_a(u^γ) the mirror measure for a bound state of a derivatives; see [15] for the precise definition of each of these factors. It is instructive to show the leading-order expansion of the integral at weak coupling, eq. (4.27), where Q(u) = ∏_i (u − u_i) is a polynomial of degree M and the u_i are the M Bethe roots of the state under consideration. Notice that the integral in u is divergent for small l_12 and large enough M. As explained in [14], the sum over bound states a cures this divergence, but it is technically hard to perform the sum before the integration in u. It was then shown that (4.27) can be evaluated efficiently with the following method:
• consider the function Q(u) = e^{iut};
• do the integral in u by residues;
• write the result of the integration in terms of nested harmonic sums;
• perform the remaining sums by identifying them with harmonic polylogarithms.
The original polynomial can be recovered by acting with Q(−i∂_t) on the final result. The advantage of using the plane wave e^{iut} is that it makes the integral more convergent, allowing the evaluation of the integral in u by residues. The sum over bound states is trivialized once one identifies it with harmonic polylogarithms. Another advantage is that this method gives at once the finite-size contribution for any state.
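The polynomial-recovery step rests on a simple operator identity, which we can make explicit (our notation, a sketch of the idea rather than the paper's exact formula):

\[
Q(-i\partial_t)\, e^{iut} \;=\; Q(u)\, e^{iut}
\quad\Longrightarrow\quad
\int du\;\mu(u)\,Q(u)
\;=\;
\Bigl[\,Q(-i\partial_t)\int du\;\mu(u)\,e^{iut}\Bigr]_{t=0}\,,
\]

for any measure \(\mu\) for which differentiation under the integral sign is justified; the inner integral is the more convergent plane-wave version evaluated by residues.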
Consistency conditions
While the data from asymptotic hexagons and opposed wrapping can introduce strong constraints on the undetermined coefficients, there are certainly many configurations in the sl(2) sector which also require the evaluation of adjacent wrappings. It is however possible to fix coefficients that appear in such configurations without evaluating any adjacent wrapping explicitly, and we will also see how the input of the opposed wrapping correction up to (ℓ − 2) loops helps constrain the ℓ-loop four-point functions. Once we take the OPE limit of the correlators, it is simple to extract sum rules P^(ℓ,n), defined in terms of C^(ℓ)_{l_ij,l_0k,I}, the ℓ-loop OPE coefficient for an opposed bridge of length l_ij, adjacent bridge length l_0k, and operator O_I with the correct dimension, spin, and SU(4) charges. This type of sum rule can be extracted from the analysis of correlators like the one depicted in figure 3.
As explained above, the opposed wrapping contributions factorize in the computation of the structure constant, so we can rewrite it as in (4.32). As we lower the length of the opposed bridge to l_34 < 4, we must add contributions from opposed wrapping, which starts at two loops, leading to (4.33). Notice that the adjacent wrapping corrections can only start at three loops, which means that A_{adj,I} always simplifies to the asymptotic contribution in (4.33). Therefore the only unknowns are the opposed wrappings B^(ℓ,l_34)_{1,I}, but we obtain an overconstrained system of equations because they appear in sum rules for different splittings l_01 and l_03. In the sl(2) sector there are ⌊L/2⌋ operators of twist L and spin 2, while there are (⌊L/2⌋ + ⌊L/2⌋²)/2 configurations for the splitting of the twist-L operator in the four-point function. This poses non-trivial constraints on the undetermined coefficients of the four-point correlators.
Furthermore, if we let both opposed bridges become smaller, with l_12, l_34 < 4, then the sum rule takes the form (4.34). We can see that it is related to the sum rules in (4.32) and (4.33), and these relations can be easily implemented with the knowledge of relatively simple objects: asymptotic hexagon form factors and opposed wrapping at two loops. Moreover, if any of the opposed bridges has length bigger than one, then the last term in (4.34) is identically zero. The fact that sum rules for different opposed bridge lengths respect such relations imposes non-trivial constraints on the four-point correlators. Finally, at higher loops the arguments are very similar, the only difference being that at ℓ loops the last term in (4.34) will include opposed wrapping corrections up to (ℓ − 2) loops and A_{adj,I} in (4.33) might include the contribution of adjacent wrapping corrections.
Results
In this section we apply the methods described above in order to fix all four-and five-loop four-point functions of protected operators. Since we could not prove the validity of the stronger version of the light-cone OPE relations (3.3) above three loops, we shall always start from the integrand constrained only by the weak relations of (3.1).
We need to obtain the functions F^(ℓ)_{b_ij} for all indices b_ij ranging between 0 and (ℓ − 1). While this bound was proved up to three loops, we do not have a direct proof at higher loops, but its existence is natural from the point of view of Feynman diagrams. At any loop order there is a maximum number of fields that can be involved in a given interaction vertex, which means that for large enough operators there will always be a number of spectator fields. Furthermore, our results seem to indicate that the strong light-cone OPE relations (3.3) are valid at four and five loops, and the strong version of the integrand is the same for all values of the bound larger than or equal to κ_min(ℓ), which suggests that this is the correct bound.
Four loops
At four loops we expect the bound on the {b_ij} in eq. (2.15) to be κ = 3, but in order to test this we start with functions F^(4)_{b_ij} whose indices are bounded by κ = 5. The weak ansatz fixes all 2451 functions up to 149 undetermined coefficients, which is also the number of degrees of freedom in the integrated correlators.
If we impose the equations from section 4.1, we are able to fix 130 of the 149 coefficients. Then we consider correlators in the sl(2) sector by analyzing the configurations from (4.19). If the adjacent bridge length is l_01 and the opposed bridges have lengths l_12 and l_34, then the asymptotic hexagons are the only contribution up to min(l_12, l_34, l_01 + 1) loops. That means that we can compare the data obtained with all log^k(u) terms of the correlator for k ≥ 4 − min(l_12, l_34, l_01 + 1). There is a remarkable amount of information, and we are able to determine 18 coefficients in this way. At this point the integrand is completely fixed up to a single coefficient, which we determine using the consistency conditions presented in section 4.3. We need to evaluate opposed wrapping up to two loops, and by comparing sum rules for different opposed bridge lengths we are able to fix the last coefficient.
In the end, we are able to fix all planar four-loop four-point functions with striking ease. Regarding the result obtained, it is very interesting to observe that the bound on the indices {b_ij} does turn out to reduce to κ = 3. Moreover, we find that the solution to the weak version of the light-cone OPE (3.1) is consistent with the strong criterion (3.3). We also evaluated all three- and four-loop opposed wrapping corrections for spin-2 operators up to twist 20 and obtained a perfect match with the data extracted from the four-point functions.
Five loops
At five loops we expect the bound on the {b_ij} from eq. (2.15) to be κ = 4, but once again we test this conjecture by starting with the bound κ = 5. We need to consider 2451 functions F^(5)_{b_ij}, which contain 1217 undetermined coefficients, but when we consider the symmetries of the conformal integrals and the magic identities between them, we can show that the integrated correlator depends only on 791 coefficients.
At five loops it is quite difficult to take the OPE limit of the conformal integrals, so only the order (1 − v)^0 of the expansions is available. That means that if we naively take v to one in the conditions of section 4.1, we might lose some important information, because the different SU(4) channels contribute with different powers of (1 − v) for 0 ≤ α ≤ l_01. It is easy to see that the numbers match if one remembers that only representations with L_0 − 2M ≥ L_2 − L_1 are allowed, or equivalently, M ≤ l_01. Since the representation [0, L_0, 0] corresponds to operators in the sl(2) sector, we know that the first non-protected operator has spin two, and therefore this representation must come with a factor of (1 − v)^2. Analogously, the representation [1, L_0 − 2, 1] will always come with a factor of (1 − v), which means that there are two linear combinations of the functions (5.1) that vanish at v = 1. In order to obtain the maximum number of constraints from (4.10), (4.11), (4.16) and (4.17), we must then find what those linear combinations are and substitute the expansions of the conformal integrals at the leading non-vanishing order of those equations. Once we take this into consideration, we are able to fix 578 of the 791 undetermined coefficients. Then, just like at four loops, we can consider the data from asymptotic hexagon form factors and compare with the log^k(u) terms of the correlator for k ≥ 5 − min(l_12, l_34, l_01 + 1), which fixes 70 more coefficients. At this point we use the technique introduced in section 4.3, where we extract adjacent wrapping corrections by looking at correlators with opposed bridges of length 5, and then look for consistent conditions on the data of lower opposed bridge lengths. This proves very effective, and we are able to fix a further 120 coefficients by inputting only two- and three-loop opposed wrapping effects.
At this point we have fixed all correlators up to 23 coefficients. In order to fix these last degrees of freedom, we look again at equations (4.10) and (4.16), but in terms of conformal integrals and not their OPE expansions. For each equation we must consider only the conformal integrals which can contribute at the relevant powers of log(u), and once we do that we notice that all remaining equations depend only on four distinct conformal integrals. If p ≥ 7, then both functions on the right-hand side of (5.4) saturate the bound, and at leading order in u only the structure {5,5,0,0,5,5} survives, for which all orders of log(u) depend on the last undetermined coefficient. Thankfully this correlator has been evaluated in the regime of large p through hexagonalization⁴ [43], and we can in this way fix all planar five-loop four-point functions.
It is interesting to note that the solution to the weak ansatz for the integrand is compatible with the strong light-cone OPE relations (3.3), and the bound on the indices {b_ij} does reduce to κ = 4 as expected. We also evaluated all four-loop opposed wrapping corrections for spin-2 operators up to twist 20 and once again obtained a perfect match with the data extracted from the five-loop four-point functions.
Triple wrapping
As mentioned above, the integrability approach to the computation of three-point functions depends on an asymptotic contribution and finite-size corrections. By considering specific polarizations and/or large enough external operators, one can postpone some of the wrapping corrections to higher loops and in some cases even isolate specific finite-size corrections.
A simple example where this happens comes from considering the family of four-point functions ⟨O_2 O_2 O_n O_n⟩ with n ≥ 2. Looking at the singlet SU(4) representation in the OPE limit of small u and (1 − v) probes the product of structure constants C_{22K} C_{nnK}, where K represents the Konishi operator. As we increase the length n of the operators, the wrapping corrections in the adjacent bridges remain the same, but the contribution of the virtual excitation in the opposed bridge is delayed to n loops. For example, by looking at the configuration where n is six, we are able to extract the contribution of the adjacent wrappings A_adj = A^(1,0,0) + A^(0,0,1) + A^(1,0,1) to the structure constant. Perhaps more interestingly, we can now evaluate the difference of sum rules introduced in (4.33), P^(5,a)_{(1,l_34,1,1)} − P^(5,a)_{(1,5,1,1)}. (5.8)

⁴ We thank Frank Coronado for sharing this result prior to publication.
These differences probe the one-particle contribution to the bottom edge. For opposed bridge lengths 2 ≤ l_34 ≤ 4, these correlators exactly match the opposed wrapping contributions (we use the notation A^(n1,n2,n3) introduced earlier, with the middle index counting excitations on the opposed bridge). On the other hand, at l_34 = 1 there is a mismatch with the wrapping correction, which occurs when all bridges in the three-point function have length one. The triple wrapping A^(1,1,1) was originally expected at six loops, but our results indicate that it contributes already at five loops, with a definite value that we extract from the correlator. This is not unexpected, as the two virtual excitations in the adjacent bridges make the original proposal for the triple wrapping divergent. We expect that the required regularization of this term, along the lines of [15], will anticipate its contribution to five loops. In order to test that the mismatch is indeed due to a triple wrapping, we also studied the OPE limit of a further family of correlators labeled by an integer m. We isolated the twist-three contributions for all values of m and showed that in this case the results are perfectly compatible with the contribution of opposed wrapping for all bridge lengths, proving in that way that the mismatch occurs only when all bridges have length one.
Conclusions
We have obtained all four-point functions of protected operators in N = 4 SYM up to the five-loop order. Our method relies on a combination of two techniques: first we consider light-cone OPE relations between integrands of different correlators, and then we take the euclidean OPE limit of the integrated four-point functions and compare with data obtained from integrability. We extract a myriad of OPE coefficients and check that they perfectly agree with OPE data obtained with integrability (which we did not have to use to fix the correlators). While we have found convincing evidence that the saturation bound in the R-charge structures of four-point functions at ℓ loops is (ℓ − 1), it would be interesting to prove this statement. Our results also seem to indicate that the strong version of the light-cone OPE relations is valid in N = 4 SYM. This fact should be examined in more detail, as a proof of its validity would tremendously simplify the study of four-point functions of protected operators at higher loops.
By focusing on the correlator of four O20 operators, we have shown that new wrapping effects appear in the hexagon approach to three-point functions at five loops. This is an example of a fruitful interplay between the integrability machinery and more standard perturbative quantum field theory methods, and it would now be important to obtain this result from the integrability point of view. Since the regularization of hexagon form factors seems to anticipate wrapping corrections, one should study what the implications are for the positivity of the hexagon perturbation theory [44].
It is also possible to employ integrability in the study of four-point functions, by using the method of hexagonalization. It would be interesting to evaluate the observables obtained in this work with such methods, as there is now a point of comparison. Furthermore, by picking specific polarizations for the external operators one can probe different finite-size corrections of the four-point functions. In principle, this could lead to integrability representations of higher-point conformal integrals, in the spirit of [45].
In this work we considered the euclidean OPE limit of the four-point functions, which was obtained at leading order with the method of asymptotic expansions. However, it would be extremely helpful to evaluate exactly all conformal integrals that appear in the correlators, since that would allow us to take other relevant limits which cannot be accessed by asymptotic expansions.
Conformal symmetry can be used to send x_1 to the origin and x_4 to infinity, and the final result is naturally expressed in terms of ratios of the remaining coordinates. The structure of the four-point function is not arbitrary, since the short-distance singularities are constrained by the OPE data of the theory. We are interested in the short-distance limit of the integrals; in other words, we want to study the behavior of the integral when x_2 approaches the origin. The main idea behind the method of asymptotic expansions is to divide each integration domain into several regions, so that it is possible to take the short-distance limit inside the integral. In practice, we divide the integration over each internal point x_i into two different regions: one where the integration point is close to x_2 and one where it is close to x_3. In each of these regions we can expand the propagators as a geometric series (see below). There are 2^ℓ regions corresponding to the ℓ integration points, and in each of these regions the original integral is expressed as a product of two-point integrals. If k integration variables are in the region close to x_2, then a k-loop integral with external points x_1 and x_2 multiplies an (ℓ − k)-loop integral with external points x_1 and x_3. Then we use the fact that the integrals are not all independent, since they satisfy integration-by-parts (IBP) identities. In particular, this makes it possible to express any two-point integral as a linear combination of master integrals. These identities can be obtained using a computer implementation of the Laporta algorithm such as FIRE [50]. The values of the master integrals used for this computation were evaluated in [51].
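The propagator expansion used in these regions is, schematically, a geometric series; the form below is our reconstruction of the standard expansion (the original display may be arranged differently). In the region where the integration point x_i stays far from x_2, with x_1 at the origin and |x_2| ≪ |x_i|,

\[
\frac{1}{(x_i - x_2)^2}
\;=\;
\frac{1}{x_i^2}\,\sum_{k\ge 0}
\left(\frac{2\,x_i\cdot x_2 - x_2^2}{x_i^2}\right)^{\!k}\,,
\]

which can be truncated at the order in x_2 required by the desired accuracy of the short-distance expansion.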
The integrals used here might be useful for other studies, and for this reason we include them in an auxiliary file. We have computed the four-loop integrals up to orders u^0 and (1 − v)^4, while the expansions of the five-loop integrals are at order u^0 and for v = 1.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Parameter Space of Atomic Layer Deposition of Ultrathin Oxides on Graphene
Atomic layer deposition (ALD) of ultrathin aluminum oxide (AlOx) films was systematically studied on supported chemical vapor deposition (CVD) graphene. We show that by extending the precursor residence time, using either a multiple-pulse sequence or a soaking period, ultrathin continuous AlOx films can be achieved directly on graphene using standard H2O and trimethylaluminum (TMA) precursors even at a high deposition temperature of 200 °C, without the use of surfactants or other additional graphene surface modifications. To obtain conformal nucleation, a precursor residence time of >2s is needed, which is not prohibitively long but sufficient to account for the slow adsorption kinetics of the graphene surface. In contrast, a shorter residence time results in heterogeneous nucleation that is preferential to defect/selective sites on the graphene. These findings demonstrate that careful control of the ALD parameter space is imperative in governing the nucleation behavior of AlOx on CVD graphene. We consider our results to have model system character for rational two-dimensional (2D)/non-2D material process integration, relevant also to the interfacing and device integration of the many other emerging 2D materials.
In the ALD pulse sequences below, A denotes the oxidant, here H2O vapor or O3, and B denotes the metal precursor, here TMA. The oxidant/precursor dose is calculated as the product of the delivery pressure (P_dos) and the residence time (t_dos), which in CM and PM are both governed by a single parameter, the ALD pulse time (t_pul). All samples are loaded while the chamber is at the preset deposition temperature (T_dep), and the process chamber is purged with N2 for more than 10min (t_purin) before the ALD process is started. The purge time between oxidant/precursor pulses (t_pur) is varied depending on T_dep. In PM, the samples are exposed to a series of oxidant pulses prior to the ALD process, where the pretreatment time (t_pretreat) is determined by the total number of pulses. In MM, each oxidant/precursor is delivered twice in quick succession with a very short time interval (t_intv). In SM, the flow in the process chamber is stopped for several seconds (t_hold) to allow the samples to soak in the oxidant/precursor. Details of the common parameters for all ALD processes are given in Table S1, and the process schematics are shown in Figure S1.
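As a minimal illustration of the dose bookkeeping just described, consider the following sketch (ours, not the authors' code). The function name is ours, and the 0.2 Torr delivery pressure is an assumed value, chosen so that the product reproduces the ~0.14 Torr·s H2O/TMA dose and the t_dos ~0.7s quoted later in this supplement:

```python
# Sketch (ours) of the ALD dose bookkeeping: dose = P_dos * t_dos.
# The 0.2 Torr delivery pressure is an assumption; only the product
# (~0.14 Torr*s for H2O/TMA) is quoted in this supplement.

def dose_torr_s(p_dos_torr: float, t_dos_s: float) -> float:
    """Oxidant/precursor dose as delivery pressure times residence time."""
    return p_dos_torr * t_dos_s

print(dose_torr_s(0.2, 0.7))  # -> 0.14 Torr*s, the quoted H2O/TMA dose
```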
Tables S1-S4 (headers "Parameters / Values"; the numerical entries are lost in this copy) list the deposition temperature (T_dep) and related settings for each process. The process using O3/TMA is shown schematically in Figures S1a and S1b, with further process schematics in Figures S1c and S1d.
SI2. Aluminum oxide surface coverage calculation
The surface coverage of ALD AlOx (θ) is calculated based on the contrast observed in SEM images, as shown in Figure S2, with bright regions indicating the AlOx-covered graphene surface and dark regions indicating the absence of AlOx, i.e. the bare graphene surface. Each SEM image (8-bit grayscale) is first filtered with a set of Top-Hat and Bottom-Hat transformations using a 64x64 morphological structuring element to enhance contrast and eliminate uneven background illumination.1,2 The filtered image is subsequently transformed into a binary image (black and white) by the Otsu thresholding criterion.3 The bright-region area is calculated by estimating the total number of "white" pixels in the entire binary image using the 2x2 area calculation rule.4 The AlOx surface coverage (θ) is then calculated by normalizing the bright-region area by the total area of the entire binary image, i.e. the total sum of both "white" and "black" pixels.

As shown by the SEM analysis (Fig S3), the AlOx surface coverage (θ) on graphene is found to increase with increasing t_pur. At t_pur = 10s, AlOx predominantly nucleates as clusters at the ridges of the G/Cu surface features. Such highly heterogeneous nucleation is reflected by the extremely low θ of just ~21% (Fig S3a). While the nucleation is still largely heterogeneous, the nucleation density, particularly in the graphene troughs, improves significantly with increasing t_pur, as reflected by the increase of θ to ~42% as t_pur increases to 25s (Fig S3b). Indeed, nearly homogeneous nucleation with θ >98% is observed when t_pur = 60s (Fig S3c). Nevertheless, a significant decrease in AlOx nucleation density is observed when t_pur is prolonged substantially, as reflected by the decrease of θ to ~61% as t_pur increases to 300s (Fig S3d). Such a decrease in θ is expected, as desorption, or even thermal decomposition, of H2O/TMA also takes place during the ALD process. Due to the relatively low desorption rate of H2O/TMA at a T_dep of 80°C, AlOx still nucleates on the graphene surface, albeit heterogeneously, even when t_pur is set to be extremely long. In this study, t_pur is carefully selected for each T_dep such that it is sufficiently long to prevent the formation of prematurely hydrolyzed TMA species but not so long as to result in undesired H2O/TMA desorption.
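The SI2 image-analysis pipeline described above can be reproduced in a few lines. The following is a sketch using OpenCV (our implementation, not the authors' code); it follows the described steps with a 64x64 structuring element and Otsu thresholding, but substitutes plain pixel counting for the 2x2 area-calculation rule:

```python
# Sketch (ours) of the SI2 coverage analysis; not the authors' code.
# Assumes an 8-bit grayscale SEM image stored on disk.
import cv2
import numpy as np

def alox_coverage(path: str) -> float:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (64, 64))
    # Top-hat keeps small bright features; bottom-hat keeps small dark ones.
    # Adding the former and subtracting the latter enhances contrast and
    # flattens uneven background illumination.
    tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)
    blackhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)
    filtered = cv2.subtract(cv2.add(img, tophat), blackhat)
    # Otsu's criterion picks the threshold separating the bright
    # (AlOx-covered) and dark (bare graphene) pixel populations.
    _, binary = cv2.threshold(filtered, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Coverage theta = fraction of "white" pixels in the binary image.
    return float(np.count_nonzero(binary)) / binary.size

# Example usage (hypothetical filename):
# theta = alox_coverage("sem_image.tif")
```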
SI4. Aluminum oxide thickness measurement by AFM
The thickness of the ALD AlO x film is estimated by AFM measurement on HOPG surfaces.
HOPG surfaces were chosen as representative samples due to their similarity to graphene in terms of wettability and chemical inertness. As HOPG surfaces are known to be much less wettable by H2O than G/Cu,10,11 the measured values can serve as lower-bound values for the actual thickness of the AlOx film on graphene. In addition, HOPG surfaces can easily be patterned without leaving behind significant residues; in contrast, graphene patterning often results in measurable residues that may skew the AFM measurement. As measured by AFM, 12 ALD cycles result in an AlOx film thickness of 1.1(±0.2) nm and an RMS surface roughness of 0.52(±0.01) nm (Fig S4a). For 40 and 60 ALD cycles, the AlOx film thickness is measured at 3.7(±0.3) nm and 5.9(±0.3) nm, with RMS surface roughnesses of 0.54(±0.06) nm and 1.24(±0.47) nm, respectively (Fig S4b and S4c). This is equivalent to a film growth rate of 0.088 nm/cycle (Fig S4d).
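To make the growth-rate figure concrete, one can fit the three thickness values quoted above; the sketch below (ours) uses an unconstrained linear least-squares fit. Note that the slope obtained from just these three points need not reproduce the quoted 0.088 nm/cycle exactly, since the reported rate presumably reflects the authors' full data set and fitting convention (Fig S4d):

```python
# Sketch (ours): AlOx growth per cycle from the three AFM thicknesses
# quoted above. Illustrative only; the reported 0.088 nm/cycle comes
# from the authors' own fit (Fig S4d) and may use more data points.
import numpy as np

cycles = np.array([12.0, 40.0, 60.0])   # ALD cycles
thickness = np.array([1.1, 3.7, 5.9])   # AlOx thickness on HOPG, nm

slope, intercept = np.polyfit(cycles, thickness, 1)
print(f"growth rate ~ {slope:.3f} nm/cycle, intercept {intercept:.2f} nm")
# A nonzero intercept can absorb a nucleation delay during the first cycles.
```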
Figure S5b shows the leakage currents in these capacitors, where the leakage currents are lower than 1nA at 0.7 V and 2.2 V for AlOx films deposited in 20 and 50 ALD cycles, respectively. These values match well with those reported for AlOx formation on graphene in the literature.12 Hence we are confident that the AlOx formed in the present study is continuous and shows potential to act as an efficient high-k dielectric in graphene electronics with EOT <1.3 nm.13 Obtaining such an ultrathin dielectric on graphene is very important for high-frequency operation of FETs.

For the ellipsometry comparison (Fig S7), the ALD processes were selected because they yield conformal AlOx nucleation (θ>98%) with just 12 ALD cycles on all graphene samples (see the main article). The refractive index of the AlOx film (n) was calculated by fitting the obtained ellipsometry data (Ψ and Δ) to the available Al2O3 thin-film model for each wavelength at an incident angle of 60°.6 As can be seen, the refractive index (n) of CM 80°C (Fig S7d) is consistently the lowest amongst all samples across the entire spectrum. The refractive index (n) of CM 200°C (Fig S7a) is very similar to that of SM 200°C (Fig S7b), and both are consistently the highest amongst the samples across the entire spectrum. Note that a difference in refractive index (n) of 0.02 indicates a difference in AlOx density of ~0.12 g/cm3.5 Thus, this finding suggests that the density of AlOx films deposited at a T_dep of 200°C is slightly higher than that at a T_dep of 80°C. However, it is important to note that this finding is obtained from ~10nm-thick AlOx films on SiO2 rather than directly from sub-2nm AlOx films on graphene.

Figure S8 caption (partial): ... contributions, and (c) scatterplot of 2D peak linewidth (Γ_2D) against G peak linewidth (Γ_G). All AlOx depositions are performed in 12 ALD cycles total. The doses for both H2O and TMA are maintained at ~0.14 Torr·s, while that for O3 is set at ~28.65 Torr·s.
SI8. Effect of prolonged ozone pretreatment
The effect of prolonged ozone exposure during ALD on graphene quality is assessed by Raman spectroscopy using a photon excitation of 532 nm (Fig S8). The peak intensity ratio between the D band and the G band (I_D/I_G) is found to increase with increasing O3 pretreatment time (Fig S8a). Such an increase in I_D/I_G indicates that defects are being introduced into the graphene structure during O3 pretreatment, and a longer O3 pretreatment time (t_pretreat) results in a higher defect density. Thus, the use of O3 in the ALD process is not completely harmless to the graphene, even at a low T_dep of 80 °C. To avoid introducing excessive defects into the graphene, it is therefore necessary to limit t_pretreat to just 2 min, which is sufficient to achieve relatively homogeneous AlOx nucleation with θ of ~96% (see the main article, Fig 3d and 3f).
The peak frequencies of the 2D band (ω_2D) and G band (ω_G) are found to shift toward higher values with increasing O3 pretreatment time, where ω_2D ~2681 cm⁻¹ and ω_G ~1591 cm⁻¹ are observed for PM-2m-O3, and ω_2D ~2688 cm⁻¹ and ω_G ~1595 cm⁻¹ are observed for PM-15m-O3 (Fig S8b). The shift of the ω_2D and ω_G modes toward higher wavenumbers indicates a significant increase in the graphene doping level, from ~3x10¹² cm⁻² to ~5x10¹² cm⁻² when t_pretreat is set to 2 min, and to ~6x10¹² cm⁻² when t_pretreat is set to 15 min. While the mechanical strain level remains similar in magnitude, between -0.1 and -0.2%, when t_pretreat is set to 2 min, it increases to between -0.2 and -0.3% when t_pretreat is set to 15 min. 8,9 In addition, the linewidth of the 2D band (Γ_2D) is found to be […]
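The doping and strain levels quoted above are typically extracted from (ω_G, ω_2D) pairs via the vector-decomposition ("correlation") analysis of refs 8,9. The sketch below shows the generic linear inversion behind such an analysis. The intrinsic peak positions and sensitivity coefficients are illustrative literature values for biaxial strain and hole doping, so the numbers this sketch produces will not reproduce those quoted in the text, which rely on the specific coefficients of refs 8,9.

```python
# Generic 2x2 inversion of Raman peak shifts into (strain, doping).
import numpy as np

w_G0, w_2D0 = 1581.6, 2676.9          # assumed intrinsic positions (cm^-1)
# Rows: [d(omega)/d(strain, %), d(omega)/d(doping, 1e12 cm^-2)] (assumed).
A = np.array([[-69.1, 1.0],           # G band sensitivities
              [-160.0, 0.7]])         # 2D band sensitivities

def strain_and_doping(w_G: float, w_2D: float):
    """Solve for (strain in %, doping in units of 1e12 cm^-2)."""
    return np.linalg.solve(A, np.array([w_G - w_G0, w_2D - w_2D0]))

eps, n = strain_and_doping(1591.0, 2681.0)   # PM-2m-O3 positions from text
print(f"strain ~ {eps:.2f} %, doping ~ {n:.1f} x 1e12 cm^-2")
```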
As a control experiment, AlOx nucleation was also examined on bare Cu foils (Fig S9a) and SiO2 wafers (Fig S9b) […]. Consequently, one should adjust the typically used ALD parameters if homogeneous nucleation on graphene is to be achieved.
SI10. Effect of extended TMA residence time
The effect of prolonged TMA residence time during ALD on graphene quality is assessed by Raman spectroscopy using a photon excitation of 532 nm (Fig S10; see also the main article Fig. 5d).
The peak intensity ratio between the D band and G band (I_D/I_G) is found to increase with increasing TMA residence time, where an I_D/I_G of ~0.07 is observed for MM-10s. In contrast, SM-3.5s yields an I_D/I_G of just ~0.04, similar to that of the as-transferred G/SiO2 (Fig S10a). The peak frequency of the G band (ω_G) is found to shift toward higher values with increasing TMA residence time, where ω_G increases slightly from ~1584 cm⁻¹ for SM-3.5s to ~1585 cm⁻¹ for MM-10s (Fig S10b). The linewidths of the 2D band (Γ_2D) and G band (Γ_G) remain relatively constant with increasing TMA residence time (Fig S10c). An increase in I_D/I_G indicates the formation of defects on graphene due to exposure to the highly reactive TMA when the residence time (t_dos) is set to be significantly long. In addition, the shift in ω_G indicates a slight increase in the graphene doping level from <10¹² cm⁻² to ~10¹² cm⁻², while the mechanical strain level remains similar in magnitude, between -0.05 and -0.15%. 8,9 While the use of TMA with t_dos ~3.5 s is harmless to the graphene, exposure to TMA for ~10 s results in more defective and doped graphene. Thus, it is necessary to find an optimal TMA residence time that is sufficiently long to obtain conformal AlOx nucleation without inducing excessive damage or doping to the graphene. In this study, a t_dos of 2-3.5 s is found to be optimal. It is important to note that the optimal t_dos may differ from one ALD system to another.
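For orientation, the I_D/I_G values above can be converted into an approximate defect density using the empirical relation of Cançado et al. (Nano Lett. 11, 3190, 2011) for visible excitation. This relation is an external assumption, not part of the analysis in this study, and it holds only in the low-defect-density regime.

```python
# Rough defect-density estimate from the D-to-G intensity ratio,
# n_D (cm^-2) ~ (1.8e22 / lambda^4) * (I_D / I_G), lambda in nm.
def defect_density(id_ig: float, wavelength_nm: float = 532.0) -> float:
    """Approximate defect density n_D in cm^-2 (Cancado et al. 2011)."""
    return (1.8e22 / wavelength_nm**4) * id_ig

for label, ratio in [("SM-3.5s", 0.04), ("MM-10s", 0.07)]:
    print(f"{label}: n_D ~ {defect_density(ratio):.1e} cm^-2")
```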
"Physics"
] |
Synthesis and Biological Evaluation of Certain New Cyclohexane-1-carboxamides as Apoptosis Inducers
A series of 1-(N-phenyl-2-(heteroalicyclic-1-yl)acetamido)cyclohexane-1-carboxamide derivatives (5a-m) and 1-(phenyl(heteroalicyclic-1-ylmethyl)amino)cyclohexane-1-carboxamide derivatives (6a-f) of biological interest was designed and synthesized through coupling of 1-(2-chloro-N-phenylacetamido)cyclohexane-1-carboxamide (4) and (phenylamino)cycloalkanecarboxamide (2) with different amines. The structures of the target compounds were elucidated via IR, 1H and 13C NMR, MS, and microanalysis. Compounds 5a-m and 6a-f were evaluated for their in vitro antitumor activity against four different cancer cell lines: MCF-7, HepG2, A549, and Caco-2. Compound 5i exhibited promising activity against the breast cancer cell line (IC50 = 3.25 μM) compared with doxorubicin (IC50 = 6.77 μM). Apoptosis and cell cycle analysis of compound 5i revealed good antitumor activity against the MCF-7 cancer cell line, with induction of apoptosis and cell cycle arrest.
INTRODUCTION
Cancer is one of the most prominent diseases worldwide and represents the second cause of human mortality after cardiovascular diseases 1. Drug resistance to cancer chemotherapy is considered a serious problem 2. Thus, there is a critical need for new chemotherapeutic agents 3,4. The cyclohexane core has enriched the medicinal chemistry armamentarium with several bioactive candidates having diverse biological activities, such as antipsychotic 5, expectorant 6, anticonvulsant 7,8, analgesic 9 and anticancer activities 10-12. Etoposide (I) and Teniposide (II) contain a cyclohexane moiety in their structures and are used in cancer chemotherapy for the treatment of lung cancer, acute leukemia and lymphoma through a cytotoxic mechanism of DNA-topoisomerase II inhibition 13-15. Moreover, the aminoacyl pharmacophore chain and amide moiety were included in the structural frame of different compounds, III and IV (Fig. 1), which were found to possess antitumor activity 16-19.
These findings encouraged us to prepare the target compounds 5a-m and 6a-f through a molecular hybridization strategy, combining two or more pharmacophore moieties in one molecule with the aim of improving the pharmacological profile 20.
Chemistry
The preparation of the final compounds 5a-m and 6a-f, as well as the intermediates 1-4, is illustrated in Scheme 1. Cyclohexanone was reacted with potassium cyanide and aniline in glacial acetic acid to produce the nitrile derivative 1, which was hydrolyzed using sulfuric acid at room temperature to produce the amide compound 3.
Cell cycle analysis and apoptosis induction
Compound 5i was the most potent against the MCF-7 cancer cell line. Consequently, we examined its effect on cell cycle progression using a BD FACSCalibur after treatment with 3.25 μM of 5i for 48 h. Cells were then stained with an annexin V-FITC antibody and propidium iodide and analyzed by FACS (Table 2). In the cell cycle analysis, compound 5i revealed induction of apoptosis at the pre-G1 phase and arrest at the G2/M phase.
Chemistry
General
Melting points were measured with an Electrothermal capillary apparatus and are uncorrected. Infrared (IR) spectra were recorded on a JASCO FT/IR-6100 spectrometer. Spectral data (1H-NMR as well as 13C-NMR) were acquired on a Jeol ECA 500 MHz spectrometer, and chemical shift values are reported in ppm on the δ scale. Mass spectral data were obtained using electron impact (EI) ionization. Column chromatography was conducted using silica gel 60 with chloroform/methanol 9/1 (v/v) as the mobile phase.
General procedure for the synthesis of 1-(N-phenyl-2-(heteroalicyclic-1-yl)acetamido)cyclohexane-1-carboxamides (5a-m)
To a solution of 4 (1.32 g, 0.0045 mol) in ethanol (30 mL), the appropriate amine derivative (0.0135 mol) was added. The reaction mixture was refluxed under stirring for 12 h, after which the ethanol was evaporated under reduced pressure. The residue was dissolved in ethyl acetate (30 mL) and washed with water (3 x 30 mL); the organic layer was then separated, dried over anhydrous Na2SO4, and evaporated under reduced pressure to produce 5a-m.
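As a quick sanity check on the quantities above, the sketch below verifies that 1.32 g corresponds to ~0.0045 mol and that three equivalents of amine are used. The molecular formula of intermediate 4 (C15H19ClN2O2) is inferred by us from its systematic name and is not stated in the text.

```python
# Stoichiometry check for the general procedure above.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "Cl": 35.453, "N": 14.007, "O": 15.999}

def molar_mass(formula: dict) -> float:
    """Sum of atomic masses for a formula given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

mw_4 = molar_mass({"C": 15, "H": 19, "Cl": 1, "N": 2, "O": 2})  # assumed formula
moles_4 = 1.32 / mw_4
print(f"MW(4) ~ {mw_4:.1f} g/mol, 1.32 g ~ {moles_4:.4f} mol")  # ~0.0045 mol
print(f"amine equivalents: {0.0135 / moles_4:.1f}")             # ~3 equiv
```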
Table 1: Antiproliferative activity of compounds 5a-m and 6a-f
Most of the compounds are selective and show potential cytotoxicity towards MCF-7 adenocarcinoma cells, with IC50 values in the range 3.25-36.8 μM as compared with doxorubicin (IC50 value of 6.77 μM). Compound 5i showed the most potent biological activity, with an IC50 value of 3.25 μM.
"Chemistry"
] |
Fast and precise inference on diffusivity in interacting particle systems
Particle systems made up of interacting agents are a popular model used in a vast array of applications, not least in biology, where the agents can represent everything from single cells to animals in a herd. Usually, the particles are assumed to undergo some type of random movement, and a popular way to model this is with Brownian motion. The magnitude of random motion is often quantified using the mean squared displacement, which provides a simple estimate of the diffusion coefficient. However, this method often fails when data are sparse or interactions between agents are frequent. In order to address this, we derive a conjugate relationship in the diffusion term for large interacting particle systems undergoing isotropic diffusion, giving us an efficient inference method. The method accurately accounts for emerging effects such as anomalous diffusion stemming from mechanical interactions. We apply our method to an agent-based model with a large number of interacting particles, and the results are contrasted with a naive mean square displacement-based approach. We find a significant improvement in performance when using the higher-order method over the naive approach. This method can be applied to any system where agents undergo Brownian motion and will lead to improved estimates of diffusion coefficients compared to existing methods. Supplementary Information The online version contains supplementary material available at 10.1007/s00285-023-01902-y.
Introduction
In many areas of the applied sciences, stochastic differential equations (SDEs) are a popular and well-studied framework for modelling processes undergoing both deterministic and random dynamics. Examples of application areas are physics (Van Kampen 1992), chemistry (Van Kampen 1992), biology (Lewis et al. 2013), finance (Shreve 2004) and control theory (Stengel 1986). The application in mind for this paper is models of in vitro cell migration, with the location of a cell at time t being denoted x(t). In its most general form, an N-dimensional system of Itô SDEs is given by the equation

dx(t) = μ(x(t)) dt + σ dW(t),     (1.1)

where μ : R^N → R^N is the drift function, σ is an N × M diffusion matrix and W(t) is an M-dimensional standard Wiener process. For convergence and well-posedness, μ and σ have to satisfy a set of standard Lipschitz requirements (Klebaner 2012). This framework allows a large number of natural phenomena to be modelled and studied in a relatively compact way, and is intrinsically linked to the macroscopic phenomenon of diffusion of gases and liquids (Krapivsky et al. 2010), where the drift term corresponds to both external forces and inter-particle interaction. Regression of such models to fit observed data is an active field of research among both mathematicians (Iacus 2009) and members of the application communities; e.g. in control theory, stochastic differential equations are usually studied in the context of state space models (Schön and Lindsten 2015). In the most basic of cases, expressions for parameter estimates can be found exactly, such as in the seminal Ornstein-Uhlenbeck process often used as a toy example in physics. In one dimension it is expressed as

dx(t) = −αx(t) dt + σ dW(t).     (1.2)
For this equation, the transition probability from a state x(s) to a future state x(t) at time t > s follows a normal distribution with time-dependent mean and variance, and as such a maximum likelihood estimate is readily available for α and σ given observed data. For most real-world applications, such simple models constitute important building blocks and learning tools, but are generally insufficient to accurately describe or forecast a real-world system. Furthermore, an expression as elegant as (1.2) is impossible to find for almost all models, and the inference must be carried out using some sort of approximation. Approximations can be carried out in a multitude of ways. For example, one might opt for a likelihood-free approach such as Approximate Bayesian Computation (Picchini 2014), or try to simplify the terms of the equation, hopefully resulting in a tractable expression. An example of such a method is local linearisation of the SDE (Shoji and Ozaki 1998).
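As a concrete illustration of this point, the sketch below simulates the one-dimensional Ornstein-Uhlenbeck process (1.2) and recovers σ from the (approximately) Gaussian transition density. All parameter values, and the use of a plain Euler-Maruyama discretisation, are illustrative choices of ours, not taken from the paper.

```python
# Simulate dx = -alpha*x dt + sigma dW and recover sigma from increments.
import numpy as np

rng = np.random.default_rng(1)
alpha, sigma, dt, K = 0.5, 0.3, 0.01, 100_000

x = np.zeros(K)
for k in range(K - 1):
    x[k + 1] = x[k] - alpha * x[k] * dt + sigma * np.sqrt(dt) * rng.normal()

# For small dt, x_{k+1} | x_k ~ N((1 - alpha*dt) x_k, sigma^2 dt), so the
# MLE of sigma^2 is the mean squared drift-corrected increment over dt.
resid = x[1:] - x[:-1] + alpha * x[:-1] * dt
print(np.sqrt(np.mean(resid**2) / dt))  # ~0.3
```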
Mean square displacement and induced sub-diffusivity
The estimation of diffusion rates in interacting particle systems has long been a key component of mathematical biology (Swanson et al. 2000). Through the application of Itô calculus, one can find a relationship between individual-based models and partial differential equations (PDEs) describing the population on a macroscopic level (Oelschläger 1989). The archetypical PDE, (1.3), is given as […], where u is the population density, N is the number of particles, D is the diffusion coefficient and V is a function determined by the pairwise interactions (Bruna et al. 2017); see (Swanson 2008) for an approach applying equations similar to (1.3) directly to in vitro data. High particle density and interaction forces on otherwise Brownian particles lead to a deviation from their standard diffusive behavior, a phenomenon known as induced subdiffusivity. This problem has been studied by physicists for the last few decades; see for example (Spiechowicz and Łuczka 2017) and (Ledesma-Durán et al. 2021). Thus, just considering the MSD of our particle system will not suffice to draw conclusions regarding the diffusion coefficient in the case of dense, highly correlated particles. In this paper we present a solution to the problem of estimating diffusivity when a mechanistic model of cell-to-cell interaction is available in the form of an SDE system.
Our contributions
In this paper, we cover a Bayesian conjugacy for certain types of interacting particle systems in two dimensions, useful in tracking problems using microscopy (Dickinson and Tranquillo 1993) but with possible applications to, for example, satellite data (Farine et al. 2015). The key limitation is that we consider the case of isotropic diffusion, i.e. SDEs where the diffusion matrix is given by σ = √(2D) I, where D is the diffusion coefficient. However, such a model is applicable in a diverse array of cases, e.g. the tracking of animal migration or bacterial movement (Browning et al. 2020). The work in this paper pertaining to analytical expressions of approximate transition densities is in itself not new; there exists a wide literature on the subject covering many different levels of approximation, see for example (Gobet and Pagliarani 2018) for a comprehensive treatise. What is lacking in the literature, however, are simple methods for deriving and using such transitions when facing realistic scientific problems, and thus our contribution is to provide a bridge between the field of stochastic calculus and mathematical biology.
Setting and assumptions
In this section, we specify the type of SDE models our method applies to. Consider a system of N interacting particles in R², with the system first being observed at time t_0. Individually, the time evolution of each particle x_i(t) is modelled as an autonomous SDE with isotropic diffusion, i.e.

dx_i(t) = a_i(x(t)) dt + σ_i dW_i(t),     (2.1)

where W_i(t) is a two-dimensional Wiener process and a_i(x(t)) : R^{2N} → R² is a twice differentiable vector-valued function modelling the interaction of the particles. We assume that all interactions featured in a_i are pairwise and uniform across all pairs of particles, i.e. a_i(x(t)) = Σ_{j≠i} a(x_i(t), x_j(t)). Assume we observe the state of the particle system at equally spaced times t_k, k = 0, 1, …, K, and from this we wish to conduct inference on σ_i. For the context of this paper, we assume that a is a known function.
Brief overview of MSD
The typical way to compute the mean square displacement for a group of random walkers x_i(t), i = 1, …, N, observed at the times t_0 and T > t_0 is by computing

MSD_x(T) = (1/N) Σ_{i=1}^{N} ||x_i(T) − x_i(t_0)||².     (3.1)

If x is the position of random walkers in d spatial dimensions, this quantity relates to […] (3.2). Framing this in the context of SDEs: if the random walkers x_i(t) are independent and follow a pure Wiener process, i.e. (1.1) with μ = 0 and σ := √(2D), we see that the expression (3.1) is simply the sample variance of the transition densities (Klebaner 2012) when sampled once for each random walker. Now, from the independence of increments of Brownian motion, we can expand this notion given a set of observations […]. Here, t_0 = 0 and t_K = T. Crucially, one can note that the quantity MSD*_x(T)/d is in fact the maximum likelihood estimator for D. This follows from the Gaussian increments of Wiener processes (3.4). We conclude by noting that, given (3.4), we can recover the distribution (3.2) by filtering the sum of our observations with respect to only the first observation; this comes from the martingale property of Wiener processes (Klebaner 2012). X | Y should be read as "the distribution of X given that Y is known".
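To make the MSD baseline concrete, the sketch below simulates independent two-dimensional Brownian walkers and recovers D from MSD = 2dD × (elapsed time); all parameter values are illustrative. For interacting particles, this same estimator is exactly what becomes biased, as discussed in the remainder of the paper.

```python
# Naive MSD estimator for pure Brownian walkers in d = 2 dimensions.
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, D_true, d = 200, 500, 1.0, 0.05, 2

# Brownian increments with variance 2*D*dt per coordinate.
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(N, K, d))
paths = np.cumsum(steps, axis=1)

# MSD over the full observation window, averaged over walkers.
msd = np.mean(np.sum((paths[:, -1, :] - paths[:, 0, :]) ** 2, axis=1))
D_hat = msd / (2 * d * (K - 1) * dt)
print(D_hat)  # ~0.05 for independent (non-interacting) walkers
```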
Derivation of our method
The calculations carried out in this section are inspired by the framework given in Chapters 5 and 10 of Kloeden and Platen (1992), where the interested reader can find information on how to derive similar or higher-order schemes for simulating SDEs. The book does not, however, go into detail on applications to interacting particle systems, wherein our main contribution lies. For motivation, we start by defining the basic Euler-Maruyama approximation of (2.1).

Definition 3.1 (Euler-Maruyama with remainder term). Let 0 ≤ t_k < t ≤ t_{k+1} be a time interval and let x_i(t) be a solution to the SDE (2.1). Let the particle state x_i(t_k) := x_ik be known. The Euler-Maruyama approximation with remainder R_1 on this interval is given by

x̂_i(t) = x_ik + a_i(x_k)(t − t_k) + σ_i (W_i(t) − W_i(t_k)) + R_1.     (3.5)

Gradients are to be interpreted as Jacobian matrices. Note that (3.5) can be restated as […]. We use this as a stepping stone to the higher-order approximation used in this paper, which is presented in the following theorem.
Theorem 3.1 (Higher-order approximation for isotropic diffusion). Consider the system described by equation (2.1). Let the particle state x_i(t_k) := x_ik be known for all i; the higher-order approximation x̃_i(t) is then given by […].

Proof We first approximate the drift on the interval by its Euler-Maruyama counterpart and apply this to its occurrence in the L¹ term featured in (3.6). We then plug (3.10) into the R_1 term in (3.6) and get […], where R_2 is a remainder term consisting of (3.11)-(3.12). This gives rise to the higher-order scheme (3.13), with noise terms Z_1(t) and Z_2(t) whose variances follow, once again, by Itô isometry. Thus, we arrive at the conclusion that Z_1(t) and Z_2(t) can be expressed as linear combinations of two independent standard normal random variables U_1 and U_2, as given in (3.14)-(3.15). By substituting Z_1 and Z_2 in (3.13) with (3.14)-(3.15), we find the scheme stated in the theorem.

Lemma 3.2 (Non-degeneracy of the estimate). For symmetric matrices A_ik, (3.9) constitutes a proper probability distribution.
Proof We prove this by showing that a symmetric matrix A_ik yields a symmetric and positive definite matrix S_ik(t), thus satisfying the requirements for S_ik(t) to be a covariance matrix. We do this by showing that the smallest eigenvalue λ_m of S_ik(t) is positive. Set Δt = t − t_k, write a_ij for the (i, j):th element of A_ik, and use the shorthand S_ik := S_ik(t). From a lengthy but conceptually simple calculation, we arrive at the following statements: […]. Set μ = 3/2 + Δt a_11, ν = 3/2 + Δt a_22 and ω = Δt a_12 as shorthand notation. The condition |S_ik| > 0 can then be written as […], which is trivially true, as only squares appear on the left-hand side.
Now let us define […]. We will use this to construct a likelihood function for the (k+1):th observation of particle i using the k:th observation of all particles.
Corollary 3.2.1 (A conjugacy for isotropic diffusion). Assume we have K observations of N particles. Denote by τ_i := σ_i⁻² the precision coefficient of the i:th particle. By imposing a prior distribution τ_i ~ Gamma(α_0, β_0), we find a conjugate relationship for the posterior distribution p(τ_i | x^(1:K), θ).
Proof The proof of this corollary is a straightforward computation using the result of Theorem 3.1. From one observation k to the next, we have the likelihood function (3.17). Taking the logarithm of (3.17) and summing over all K observations gives us the log-likelihood for the entire sequence of observations. We now see that, with σ_i⁻² := τ_i, this is the log-likelihood kernel of a Gamma distribution, giving us the result stated in the corollary.
For frequentist statistics, we can instead use the maximum likelihood estimate (3.19) for σ_i. Note that in the case of a = 0, (3.19) reduces to the MSD maximum likelihood estimator as stated in (3.3). Another question worth discussing is whether further improvements can be made while keeping the conjugacy properties that the introduced method enjoys. For this, we need to take a closer look at the remainder term R_2 introduced in (3.11)-(3.12). Explicitly written, R_2 is given by […]. From (3.7)-(3.8), we have that σ_i appears in powers of at least two in R_2, and in the triple integral ∫_{t_k}^{t} ∫_{t_k}^{s} ∫_{t_k}^{z} L¹L⁰ a_i(x(u)) du dW_z ds, σ_i appears in a power of three. Since the conjugacy is founded on linear appearances of σ_i, the conjugacy covered in this paper is the most exact conjugate relationship available for isotropic diffusion coefficients in interacting systems of stochastic differential equations. Conditions on when the higher-order scheme improves upon the Euler-Maruyama scheme can be found in the supplementary material.
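To make the structure of the conjugacy concrete, the sketch below implements the analogous Gamma-precision update in the simplest possible setting, using the plain Euler-Maruyama transition rather than the paper's higher-order scheme (whose covariance S_ik additionally involves the Jacobian of the drift). It illustrates the conjugacy mechanics, not the estimator (3.19) itself.

```python
# Gamma-precision conjugacy under the Euler-Maruyama transition
#   x_{k+1} | x_k ~ N(x_k + a(x_k) dt, sigma^2 dt I), tau = sigma^-2.
import numpy as np

def gamma_posterior(x, drift, dt, alpha0=0.0, beta0=0.0):
    """x: (K+1, 2) observed positions of one particle; drift: callable."""
    alpha, beta = alpha0, beta0
    for k in range(len(x) - 1):
        resid = x[k + 1] - x[k] - drift(x[k]) * dt
        alpha += 1.0                       # d/2 per observation, with d = 2
        beta += resid @ resid / (2 * dt)   # quadratic form of the residual
    return alpha, beta                     # tau | x ~ Gamma(alpha, beta)

# The implied point estimate (posterior mode) for sigma^2 is
# beta / (alpha + 1); with a = 0 this reduces to an MSD-type estimator.
```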
Application to in vitro cancer cell migration
We now return to our main interest, which is applying this method to models of in vitro cell migration. We chose to model our cell population using a system of interacting stochastic differential equations with isotropic diffusion, where the interactions are attractive-repulsive. It has been observed that some cells are more motile than others (Kwon et al. 2019), and as such we give every cell, indexed by i, its own diffusion coefficient σ_i. At a particular moment in time t, the system evolves according to the following set of equations […], where ϕ(r) : R⁺ → R⁺ is a positive, monotonically decreasing function with lim_{r→∞} ϕ(r) = 0. We chose ϕ(r) = e⁻ʳ; this choice of ϕ gives the Morse potential as our model of interactions, an example of which is visualized in Fig. 2. Other choices, such as ϕ(r) = 1/r, are viable as well; that particular ϕ gives the Lennard-Jones potential. Here r_0 is the equilibrium distance between two cells.
Since the setting of this study is on a microscopic length scale, we fix r_0 = 1. The entire interaction potential is then governed by just D_e (well depth) and a (well steepness). In the language of the general case covered in Sect. 3, the drift a_i is built from the pairwise gradients of U, and the matrices A_ik are given in terms of Hessian(U(x_ik − x_jk)).
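A sketch of this model in code is given below. The Morse form U(r) = D_e(e^{−2a(r−r_0)} − 2e^{−a(r−r_0)}) is the standard one consistent with ϕ(r) = e⁻ʳ and r_0 = 1, but the exact prefactors of (4.1) and the hard-wall boundary handling used in the experiments are not reproduced here; all parameter values are illustrative.

```python
# Interacting particles with Morse-type pairwise forces, Euler-Maruyama.
import numpy as np

rng = np.random.default_rng(2)
N, dt, steps = 50, 1.0, 2000
De, a, r0 = 0.01, 2.0, 1.0
sigma = np.full(N, 0.01)   # per-cell noise amplitudes sigma_i

def drift(x):
    """Pairwise Morse forces: -grad of sum_j U(|x_i - x_j|)."""
    diff = x[:, None, :] - x[None, :, :]            # (N, N, 2)
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)                     # no self-interaction
    dU = -2 * De * a * (np.exp(-2 * a * (r - r0)) - np.exp(-a * (r - r0)))
    return -np.sum((dU / r)[:, :, None] * diff, axis=1)

x = rng.uniform(0, 40, size=(N, 2))                 # uniform initial seeding
for _ in range(steps):
    x += drift(x) * dt + sigma[:, None] * np.sqrt(dt) * rng.normal(size=(N, 2))
```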
Numerical experiment
To display the improvements in inference on the diffusion coefficient, a number of in silico experiments were performed. The experiments are designed to mimic the behavior of glioblastoma multiforme cancer cells migrating in vitro. Particle systems were generated from the model (4.1). We use a simulation time step h = 1, corresponding to one second. Cell migration is a slow process, and a typical diffusion coefficient in the setting we are simulating is 0.0013-0.0065 [cm²/day] (Swanson et al. 2000). Thus, h = 1 second is "close to continuous" given the scale of the problem. For the sake of simplicity, we express the diffusion coefficient in the unit [average cell diameter²/second], since the length scale of the simulation is set so that [average cell diameter] = 1. In Fig. 3, we see a snapshot from a data set generated using the model (right) and the evolution of the MSD over the entire time span for five tagged cells (left). All of these cells were seeded with the same diffusion coefficient, but they display widely varying MSD outcomes, stemming from interactions with neighbouring cells.

Fig. 3 An example of how repulsive-attractive particle systems induce sub-diffusive behaviour in particles with more neighbours. Here, every single particle of the 100 generated has the same innate diffusion coefficient, and the MSD of the two particles free from neighbours reflects the "true" diffusivity. The parameters used to generate this dataset are given as Experiment 0 in Table 1.

Four distinct experiments were performed to illustrate how our method improves on using MSD to estimate diffusivity in interacting particle systems. The experiments were designed to capture two dimensions of interest, namely the effects of temporal resolution and of cell density on the accuracy of diffusion estimation. Experiment n, for n = 1, …, 4, examines a particle system with an increasing number of individuals seeded uniformly in a 40 × 40 square and confined by hard boundaries in a 50 × 50 square area. The interaction parameters and diffusion coefficients are the same for all these experiments; see again Table 1. We then observe this particle system every 5, 10, 15, 20, 30, 45 and 60 min over the course of two days. Pseudo-code for the implementation of the inference algorithm is available in the supplementary material, as well as a GitHub repository containing all code needed to reproduce the figures.

Fig. 4 Representative posterior distributions using the higher-order method as given by Corollary 3.2.1 with prior α_0 = β_0 = 0. Ground truth in black.
Results
In this section, we present the results from the experiments detailed in Table 1. Throughout all experiments, an improper prior of α_0 = β_0 = 0 is used. In Fig. 4, we see the posterior distributions for four randomly selected cells from the experiment featuring 256 cells (experiment 3), along with a black dashed line marking the ground truth. Here, we note some heteroscedasticity in the estimates, both across the population and with respect to observation frequency. In general, we see a pattern of higher variability in accuracy (i.e. mode deviation from the true value, marked with a black dashed line) as the inter-observation time increases, with the expected increase in posterior variance (due to fewer samples) also playing a role. The corresponding MSD-based estimates (Fig. 5) are predictably underestimated, unlike the case in Fig. 4, where much higher posterior precision is observed, at least for frequent observations. To explore the performance of our method as compared to using MSD, we consider the distribution of the modes of the posteriors, as shown in Fig. 6. In the first six panels, we show kernel-smoothed histograms of the log modes σ̂ of the posterior distributions for every cell in experiment 3 at different time resolutions. We display the results from estimating the diffusion coefficient using MSD in blue and with our method in red. Here, the global trend hinted at through Figs. 4 and 5 is on full display: we see a systematic error in estimating σ using MSD, with much better accuracy (although at times more variance) using our method. In panel (G), we present our measure of model performance, the sum of mode deviations. Remembering that we have a ground truth of σ = e^{−9/2}, we display the sums in blue and red respectively, where E_MSD is the error when using MSD and E_HOS is the error when using the higher-order scheme (our method). A consistently better performance for the higher-order scheme can be seen across all temporal resolutions considered.

Fig. 6 Detailed statistics for the results of experiment 3. Although the variance in mode accuracy compared to the ground truth (black) increases for our method (shown in red) for infrequent observations, the sum of mode deviations is consistently lower than when using MSD (displayed in blue). The kernels used for smoothing are normal with standard deviation N^{−5/8}, in accordance with optimal bandwidth theory (Chen 2017).

Fig. 7 Summary comparison of our model to using MSD for experiments 1-4 at all observation frequencies. Redder shades correspond to better performance when using our method, bluer shades to better performance when using MSD.
We finish the presentation of model performance comparisons by considering the quantity Δ = E_MSD − E_HOS for each of our experiments at all temporal resolutions, summarized in Fig. 7. Here, every square represents a cell density, given by the rows, and a temporal resolution, given by the columns. The shade of the square corresponds to Δ; large negative values of Δ correspond to an advantage of our method over MSD. For the particular datasets used to generate these figures, the (in absolute terms) largest difference was Δ = −0.93134, and the coloring uses this difference as a benchmark. The performance when using MSD was superior to our method in only two cases: 64 cells observed every 45 and 60 min, respectively. Accordingly, these squares have the bluest shade of purple, while all other squares take a shade of purple featuring more red hues. The dataset, along with all code, is available at the GitHub repositories https://github.com/GustavLW/Inference and https://github.com/GustavLW/Simulation.
Discussion
In this paper, we have proposed a solution to the problem of estimating diffusion coefficients in systems with strong inter-particle interactions that relies on the existence of a model of the inter-particle dynamics. We achieve this by expanding the standard Euler-Maruyama scheme to account for these particle interactions.
First, we consider the computational complexity of the algorithm, as detailed in the supplementary material. Both Euler-Maruyama and the higher-order method have a worst-case complexity scaling of O(KN²). Upon closer inspection, however, we see that about four times as many calculations go into the higher-order method (counting everything in lines 11-13 of Algorithm 1). On the other hand, we observe an improvement in estimation performance that by far outweighs this drawback, especially for particle systems of higher density (see Fig. 7). From the example provided in Fig. 6, we see that the sum of mode deviations at 30-min time resolution using our method is comparable to that at 5-min intervals when using MSD.
The application of this method has great potential use in future studies of in vitro cell cultures in particular. It has been noted that in order for a cell culture to remain viable in a laboratory setting, a certain cell density needs to be maintained (Gerlee et al. 2022). If one wishes to estimate the diffusivity of the cells under such circumstances, our simulation study provides ample evidence that MSD is insufficient due to crowding effects. As such, correcting the diffusion estimation at the modest cost of using a model of the interaction is of great interest to mathematicians and biologists alike.
An unavoidable drawback of our method is the requirement of well-defined derivatives of the interaction function a, meaning that we are still somewhat limited in which models the method can be applied to. For example, in purely hard-sphere interactions, which is a popular model for ideal gases (Krapivsky et al. 2010), the first derivative of the interaction term is not well defined along the surface of the sphere. One can circumvent this by, for example, smoothing the interaction kernel, but the risk of numerical issues remains. Alas, this method is best suited to models with soft and smooth interactions, as is common in biology and ecology (Lewis et al. 2013; Oelschläger 1989).
There is an intrinsic relationship between interacting diffusion and the phenomena of sub- and superdiffusion, which have been observed both theoretically and experimentally (Stauffer et al. 2007). Anomalous diffusion can emerge in a number of ways from the standpoint of stochastic calculus. On the one hand, it can be a deliberate design choice of the model to choose a driving noise with a covariance structure different from that of the Wiener process (Benhamou 2007). It can also be emergent from the interaction; e.g. an emphasis on repulsive interactions will result in superdiffusive behaviour even when the underlying noise is Brownian (Fedotov and Korabel 2017). In the latter case, the 0.5-order Euler-Maruyama approximation fails to take this phenomenon into account by its very construction. This could be one potential reason why superdiffusion has been observed in crowded environments when naive MSD methods have been utilized (Smith et al. 2017). The higher-order method, however, adjusts the diffusivity by taking the Jacobian matrix of the interaction into account, making inference on the underlying, normal diffusion possible even in the case of seemingly anomalous diffusion.
The method presented in this paper can be combined with other inference strategies to conduct inference on large SDE systems. If other conjugacies exist in the drift term, one could for example construct a Gibbs sampler that alternates inference on the drift parameters and the diffusion coefficients. For less tractable models, one could still divide the inference into blocks, using conjugacies for the diffusion term and likelihood-free inference for other aspects of the model, such as the use of particle filters (Schön and Lindsten 2015). Methods such as these are proven to converge, but the mixing times of such Markov chain Monte Carlo (MCMC) methods are notoriously difficult to study, and convergence can thus be slow beyond feasibility.
It should be noted that the method presented in this paper still relies on a linearisation of the drift term. For frequent observations this is a reasonable approximation, but for infrequent observations it can lead to inaccuracies, as noted in Fig. 7, where our method performed worse than standard MSD for infrequent observations of sparse particle systems. A way to remedy this is, instead of assuming constant drift terms between observations, to solve an ODE for the expected value and the variance of each particle location on the interval between observations. While this adds computational complexity, it makes the method less sensitive to infrequent observations and is an avenue for further research. Such methods are discussed in detail in, for example, (Särkkä 2013).
To summarise, we have shown that more exact conjugacies exist for systems satisfying some fairly basic smoothness requirements. The application in mind when this discovery was made was interacting particle systems, but applications can be found in many other settings where accurate inference on a diffusion coefficient in a complex system is of importance, such as finance.
"Physics"
] |
BTR: training asynchronous Boolean models using single-cell expression data
Background Rapid technological innovation for the generation of single-cell genomics data presents new challenges and opportunities for bioinformatics analysis. One such area lies in the development of new ways to train gene regulatory networks. The use of single-cell expression profiling techniques allows the profiling of the expression states of hundreds of cells, but these expression states are typically noisier due to the presence of technical artefacts such as drop-outs. While many algorithms exist to infer a gene regulatory network, very few of them are able to harness the extra expression states present in single-cell expression data without being adversely affected by the substantial technical noise. Results Here we introduce BTR, an algorithm for training asynchronous Boolean models with single-cell expression data using a novel Boolean state space scoring function. BTR is capable of refining existing Boolean models and reconstructing new Boolean models by improving the match between model prediction and expression data. We demonstrate that the Boolean scoring function performs favourably against the BIC scoring function for Bayesian networks. In addition, we show that BTR outperforms many other network inference algorithms on both bulk and single-cell synthetic expression data. Lastly, we present two case studies in which we use BTR to improve published Boolean models in order to generate potentially new biological insights. Conclusions BTR provides a novel way to refine or reconstruct Boolean models using single-cell expression data. A Boolean model is particularly useful for network reconstruction using single-cell data because it is more robust to the effect of drop-outs. In addition, because BTR does not assume any relationship in the expression states among cells, it is useful for reconstructing a gene regulatory network with as few assumptions as possible. Given the simplicity of Boolean models and the rapid adoption of single-cell genomics by biologists, BTR has the potential to make an impact across many fields of biomedical research. Electronic supplementary material The online version of this article (doi:10.1186/s12859-016-1235-y) contains supplementary material, which is available to authorized users.
Background
The control of gene expression is tightly regulated by complex gene regulatory networks to achieve cell type specific expression, for example in embryonic [1] and blood development [2]. Moreover, dysregulation of gene expression can lead to disease development, including malignant disease such as leukaemia [3]. A better understanding of gene regulatory networks will therefore not only advance our understanding of fundamental biological processes such as tissue development, but also provide mechanistic insights into disease processes. The earlier versions of high-throughput expression profiling techniques were limited to measuring average gene expression across large pools of cells. By contrast, recent technological improvements have made it possible to perform expression profiling in single cells (See [4] for review). Protocols for the single-cell equivalent of microarray [5], qPCR [6] and RNA sequencing [7] have been developed. One of the key advantages of single cell expression profiling is that it enables the analysis of cells that are rare in number, such as tissue stem cells. In addition, obtaining the expression profiles of single cells is very useful for dissecting the heterogeneity within seemingly homogenous cell populations [2,[8][9][10][11][12].
Because single cell analysis commonly reports expression states for hundreds of individual cells, this unique information offers new opportunities for the development of algorithms that can reconstruct gene regulatory networks. Many network inference algorithms are available [13], which are based on regression, correlation, mutual information and Bayesian networks. However, most of these network inference algorithms only generate a network with static representation of gene interactions. In contrast, changes in network dynamics can be described by using dynamic models, which possess different levels of granularity and precision ranging from the simpler Boolean models to more complex differential equation-based models. More complex models such as differential equation-based models offer high precision predictions, and have been used to describe gene regulatory networks [14][15][16][17]. However, such models rely on a higher number of parameters which are often difficult to obtain and verify. In contrast, a Boolean model is one of the simplest models that can describe the dynamics of a system without the need of many parameters (For reviews, see [18,19]). In a Boolean model, each gene can take a value of 0 or 1, which represents the absence or presence of gene expression respectively. The interactions among genes in a Boolean model are described by Boolean operators like AND, OR and NOT, which closely resembles how biologists describe such interactions. Boolean models were first used to study gene regulatory networks by Kauffman in the 1970s, and since then have been used extensively to study different biological systems [20][21][22][23].
While single-cell expression data offer the advantage of capturing expression profiles at single-cell resolution, they are noisier than conventional bulk analysis. The technical noise in single-cell expression data arises from the low amount of input mRNA in a single cell. This leads to two major sources of technical noise: PCR amplification bias and drop-outs [24]. Drop-outs in particular, which are false negatives where genes are recorded as not expressed due to the low efficiency of mRNA capture from single cells, represent a substantial portion of the technical noise in single-cell expression data. Therefore, network inference techniques that are robust to the effect of drop-outs are required when reconstructing networks using single-cell expression data. Boolean models are relatively robust to the presence of drop-outs due to the binarisation of expression values. Two recent studies reported algorithms for inferring Boolean models from single-cell expression data [2,25]. Chen et al. developed SingCellNet, which uses a genetic algorithm to construct probabilistic Boolean models from expected trajectories through cell states [25]. However, SingCellNet is restricted to small networks with fewer than 10 genes, and it only determines the network structure and transition probabilities from single-cell expression data; the Boolean rules in SingCellNet are constructed via manual curation from the literature. In another study, SCNS was developed by Moignard et al. to infer an asynchronous Boolean model by analysing trajectories through a state transition graph [2]. In order to infer a Boolean model using SCNS, a connected state transition graph is required, which can be difficult to obtain from single-cell expression data: the higher the number of genes to be included in SCNS, the more cells are required to build a connected state transition graph. In addition, SCNS can only infer network structure using discretised expression data, which not only leads to a loss of information, but also makes SCNS sensitive to the discretisation method used. Lastly, both SingCellNet and SCNS rely on known general trajectories through the cell states, which require single-cell expression data from at least two cell types with known relationships.
Here, we present a model-learning algorithm, BTR (BoolTraineR), that is able to reconstruct and train asynchronous Boolean models using single-cell expression data. BTR differs from the other algorithms described above in that it can infer both network structure and Boolean rules without needing information on trajectories through cell states. We developed a scoring function based on the Boolean framework, which performed favourably in comparison to a scoring function for Bayesian networks. We show that BTR outperforms other network inference algorithms when initial networks are supplied. Lastly, we demonstrate the capability of BTR by training Boolean models using single-cell qPCR and RNA-Seq data from haematopoietic studies.
Results and discussion
A framework for scoring Boolean models with single-cell expression data

A Boolean model B is made up of n genes x_1, …, x_n and n update functions f_1, …, f_n : {0, 1}^n → {0, 1}, each associated with a gene (Fig. 1a). Each gene can take a value x ∈ {0, 1}, which represents the absence or presence of gene expression, respectively. Each update function f is expressed in terms of Boolean logic by specifying the relationships among the genes x_1, …, x_n using the Boolean operators AND (∧), OR (∨) and NOT (¬). The main difference between asynchronous and other Boolean models is the update scheme used during simulation. An asynchronous Boolean model uses the asynchronous update scheme, which specifies that at most one gene is updated between two consecutive states. Asynchronous updating is critical when modelling developmental systems that generate distinct differentiated cell types from a common progenitor, because synchronous updating generates fully deterministic models and therefore cannot capture the ability of a stem cell to mature into multiple different tissue cell types.
A state in a Boolean model B is represented by a Boolean vector s_t = {x_1t, …, x_nt} at simulation step t. States can be generated from an initial state by systematically changing one variable at each step according to the Boolean function associated with that variable. If a state has already been encountered earlier, it is ignored. This results in a directed graph of states, as exemplified in Fig. 1b, where any two connected states differ in just one variable. Taken together, all the states in the directed graph represent a model state space. The initial state used in a simulation can be obtained from the expression values at time = 0 for a time-series expression dataset, or from the expression values of known parental cell types.
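The asynchronous state-space construction can be made concrete with a few lines of code, as in the sketch below; the update rules used there are placeholders of ours, not the rules of Fig. 1a.

```python
# Asynchronous Boolean state-space generation by breadth-first search.
from collections import deque

# Placeholder update functions f_i(state) -> 0/1, one per gene.
UPDATE = [
    lambda s: s[3],                 # x1' = x4
    lambda s: s[0] and not s[2],    # x2' = x1 AND NOT x3
    lambda s: s[1] or s[3],         # x3' = x2 OR x4
    lambda s: s[3],                 # x4' = x4
]

def async_state_space(initial):
    """BFS over states, updating at most one gene per transition."""
    edges, seen, queue = set(), {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        for i, f in enumerate(UPDATE):
            new_val = int(f(s))
            if new_val != s[i]:     # keep only genuine one-gene changes
                t = s[:i] + (new_val,) + s[i + 1:]
                edges.add((s, t))
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
    return seen, edges

states, transitions = async_state_space((0, 0, 1, 1))
print(len(states), "states,", len(transitions), "transitions")
```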
Of note, the model state space of an asynchronous Boolean model closely resembles single-cell expression data. The model state space contains predicted expression states that are dictated by the known gene network underlying a Boolean model, while the single-cell expression data can be viewed as a data state space containing observed expression states that are dictated by an unknown gene network. By fine-tuning the network rules underlying the Boolean model, it should be possible to produce a predicted model state space that closely resembles the observed data state space, thereby allowing us to reconstruct the unknown gene network. BTR uses this framework to reconstruct a Boolean model from single-cell expression data (Fig. 1c). In this framework, a Boolean model is represented by its model state space, while a single-cell expression dataset is represented by its data state space. By utilising the novel Boolean state space (BSS) scoring function (see Methods), BTR evaluates how well a particular Boolean model explains the single-cell expression data by scoring the model state space with respect to the data state space. During the model training process, BTR uses a swarming hill climbing strategy to generate minimally modified Boolean models based on an initial Boolean model. These minimally modified Boolean models are then scored using the BSS scoring function, and BTR selects the best scoring Boolean models for the next iteration. By performing this process iteratively, BTR reconstructs the asynchronous Boolean model that can best explain a single-cell expression dataset.
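The actual BSS scoring function is defined in the Methods and is not reproduced here; purely for intuition, the sketch below computes one simple state-space distance in the same spirit, averaging over the observed cell states the Hamming distance to the nearest predicted model state.

```python
# Illustrative model-vs-data state-space distance (not the actual BSS score).
import numpy as np

def state_space_distance(data_states: np.ndarray, model_states: np.ndarray) -> float:
    """data_states: (cells, n) binarised cells; model_states: (states, n) 0/1."""
    # Pairwise Hamming distances between every data state and model state.
    hamming = (data_states[:, None, :] != model_states[None, :, :]).sum(axis=2)
    # Nearest-model-state distance per cell, normalised by gene count.
    return hamming.min(axis=1).mean() / data_states.shape[1]

data = np.array([[0, 0, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1]])
model = np.array([[0, 0, 1, 1], [0, 1, 1, 1]])
print(state_space_distance(data, model))  # -> 0.25; 0 if every cell state is predicted
```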
Boolean state space scoring represents a powerful scoring function for Boolean models

How well BTR performs depends heavily on the performance of the BSS scoring function. Among the different modelling frameworks, the Bayesian network framework is known to possess several well-established scoring functions that evaluate how well a particular network fits a given dataset. These scoring functions include log-likelihood, the Bayesian information criterion (BIC), Bayesian Dirichlet and K2 (see [26,27] for reviews). Since expression data have continuous values for gene expression, we selected the BIC scoring function, which can handle continuous variables, as the scoring function from the Bayesian network framework for comparison purposes.
The BSS and BIC scoring functions were evaluated using synthetic data. The true networks and expression data in the synthetic data were generated using GeneNetWeaver [28], which is also used in the DREAM5 network inference challenge [13]. In order to simulate the zero-inflated nature of single-cell expression data due to the presence of drop-outs, we introduced zero inflation into the synthetic data as described in the Methods section. An ideal scoring function should give an increasing distance score as the evaluated network becomes increasingly different from the true network. In order to test this, we generated a list of modified networks that are increasingly different from the true network in terms of edges. As the Bayesian network and Boolean frameworks impose different network structure constraints, the modified networks were generated separately, giving one list of modified Bayesian networks and another of modified Boolean networks. Although the modified Bayesian and Boolean networks are not identical, they possess the same number of differing edges when compared to the true network, ranging from 2 up to 40 differing edges. Five independent benchmark datasets, each with a different true network, true data and modified models, were used in the evaluation of the scoring functions.
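One simple way to implement such zero inflation is sketched below; the paper's exact procedure is given in its Methods and may differ, e.g. by making the drop-out probability depend on expression level.

```python
# Bernoulli drop-out mask applied to a (cells x genes) expression matrix.
import numpy as np

def zero_inflate(expr: np.ndarray, dropout_rate: float = 0.3, seed: int = 0) -> np.ndarray:
    """Set each entry to zero independently with a fixed probability."""
    rng = np.random.default_rng(seed)
    mask = rng.random(expr.shape) < dropout_rate
    return np.where(mask, 0.0, expr)

expr = np.random.default_rng(1).gamma(shape=2.0, scale=1.0, size=(100, 10))
print((zero_inflate(expr) == 0).mean())  # ~0.3 of entries dropped out
```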
By evaluating networks using zero-inflated synthetic data, both the BSS and BIC scoring functions performed well when acyclic networks are considered (Fig. 2). Both scoring functions were able to give increasing distance scores as the underlying networks become increasingly different from the true network. The BSS scoring function achieves this by considering the input expression data as a data state space, and then computing the distance score by comparing the data state space with the model state space simulated from a given network. It is expected that as a network becomes increasingly different, its model state space will become increasingly different from the data state space, which is reflected in the distance score as shown in Fig. 2c. To the best of our knowledge, this is the first time a scoring function based entirely on the Boolean modelling framework has been demonstrated to give comparable performance to a scoring function for Bayesian networks.

(See figure on previous page.) Fig. 1 Boolean model, asynchronous simulation and the framework underlying BTR. a A Boolean model can be expressed graphically in terms of nodes and edges, as well as in tabular form in terms of update functions. Note that the small black node refers to an AND interaction. b The asynchronous update scheme is best explained with the use of a graph representation of the state space, in which each connected state differs in only one node. Starting from the initial state s_1 = {0, 0, 1, 1} and evaluated using the update functions in (a), asynchronous simulation produces a model state space with 15 states. The initial state is shown as a red node, while the final steady state is shown as a pink node. c The framework underlying BTR. A Boolean model can be simulated to give a model state space, while single-cell expression data can be preprocessed to give a data state space. The Boolean state space scoring function can then calculate the distance score between the model and data state spaces. Lastly, BTR uses the computed distance score to guide the improvement of the Boolean model through an optimisation process that minimises the distance between the model and data state spaces.
As indicated in the results for Network 2 (Fig. 2c), the BSS scoring function is dependent on the underlying true network structure in certain cases and will work better at distinguishing networks that are very different. However, the BSS scoring function has a distinct advantage over scoring functions for Bayesian networks. Bayesian networks are known to impose relatively strict constraints on permissible network structures; in particular, Bayesian networks are not allowed to contain any cyclic network structure. Therefore, scoring functions for Bayesian networks cannot be used to evaluate cyclic networks. Cyclic networks are ubiquitous in biological systems, in which cyclic motifs can be present in the form of negative and positive feedback loops. Boolean models, on the other hand, are allowed to have any number of cyclic motifs in the network. Therefore, the BSS scoring function can be used to compute scores for cyclic networks. Using another five independent benchmark datasets with true networks that contain at least one cycle, the distance scores for modified networks were computed (Fig. 3). The distance scores for cyclic networks fluctuate more than those for acyclic networks due to the presence of cyclic motifs. However, the general trend, where the distance scores increase as the underlying networks become increasingly different from the true network, is still observed.
We have also evaluated the series of acyclic and cyclic networks using non-zero-inflated data (Additional file 1: Figure S1 & Additional file 2: Figure S2). When the results computed with non-zero-inflated data are compared to the results computed using zero-inflated data, we can see that zero inflation has no effect on BIC scores and a small effect on BSS scores that does not affect the general trend (Additional file 3: Figure S3). In summary, the relative mean scores, averaged across the results of all networks (Fig. 4), show that although the BIC scoring function performs slightly better than the BSS scoring function, the BSS scoring function has the advantage that it can evaluate cyclic networks.
BTR accurately infers the networks underlying synthetic datasets
Next, we compared the network inference performance of BTR with other well-known network inference algorithms. Two search algorithms guided by the BSS Boolean and BIC Bayesian network scoring functions were included in the comparison, indicated as BTR and BIC respectively. The search algorithms used for both scoring functions are based on hill climbing. The additional network inference algorithms included in the comparison are BestFit [29], ARACNE [30], CLR [31], bc3net [32], GeneNet [33] and Genie3 [34] (see Methods for brief details on the algorithms).
By using the same synthetic networks, and both non-zero-inflated and zero-inflated synthetic data, we performed network inference using the synthetic expression data alone, without any extra information. In contrast to the DREAM5 challenge [13], which also provides perturbed expression data, only a single type of expression data is provided to all the network inference algorithms: the wild-type time-course expression data at steady state. For BTR, besides performing inference with only expression data (indicated as BTR-WO), we also performed inference with both expression data and initial networks (indicated as BTR-WI) to show that BTR is able to use initial networks with known network structure to improve the inference process. The initial networks are generated randomly to contain 18 edges that differ from the true networks. The performance of the network inference algorithms is assessed in terms of F-scores [35] (Fig. 5). In order to allow comparison of performance across all network inference algorithms tested, we calculated the F-scores based only on the presence or absence of edges, ignoring any additional information such as edge type.
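Under this edge-presence-only convention, the F-score reduces to a small set computation, as sketched below; treating edges as undirected is our reading of the convention described above.

```python
# Edge-based F-score between a true network and an inferred network.
def edge_f_score(true_edges, inferred_edges):
    """Edges are given as iterables of (gene_a, gene_b) pairs, undirected."""
    true_set = {frozenset(e) for e in true_edges}
    pred_set = {frozenset(e) for e in inferred_edges}
    tp = len(true_set & pred_set)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_set)
    recall = tp / len(true_set)
    return 2 * precision * recall / (precision + recall)

print(edge_f_score([("A", "B"), ("B", "C")], [("B", "A"), ("C", "D")]))  # 0.5
```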
(See figure on previous page.) Fig. 2 The BSS scoring function compares favourably with the BIC scoring function on acyclic networks. a Acyclic networks generated from GeneNetWeaver that are designated as the true acyclic networks. Each node corresponds to a gene. Black edges indicate activation interactions, while red edges indicate inhibition interactions. Mean distance scores computed using b the BIC scoring function and c the BSS scoring function for modified networks that are increasingly different from the true network in terms of edges, using zero-inflated synthetic expression data. The modified networks contain from two up to forty different edges when compared with the true network. Each data point is the mean distance score of 100 different random modified networks that contain the same number of different edges with respect to the true network. The error bar is the standard error of the mean.

In terms of acyclic networks, the results show that the top inference algorithms using either non-zero-inflated or zero-inflated data are BTR-WI, CLR, BIC and BTR-WO. As for cyclic networks, the top inference algorithms differ between non-zero-inflated and zero-inflated data: BTR-WI, BTR-WO, CLR and BC3NET gave the best performance with non-zero-inflated data, while BTR-WI, ARACNE, GENIE3 and CLR gave the best performance with zero-inflated data. When all results are taken together, BTR-WI, CLR, BTR-WO and GENIE3 gave the best performance overall. Note that the ranking of network inference algorithms in this study differs from the ranking in the DREAM study because different scoring criteria are used (the F-score is used here, as opposed to the area under the precision-recall (AUPR) and receiver operating characteristic (AUROC) curves in the DREAM study), and because the DREAM study was done using multiple types of synthetic data, such as expression data with gene perturbations. In general, the presence of drop-outs affects the performance of network inference algorithms in different ways (Fig. 5b). In cases such as bc3net and GeneNet, performance decreases when drop-outs are present, while the impact of drop-outs on the performance of BTR is minimal. Interestingly, the performance of BestFit increases in the presence of drop-outs, possibly due to better binarisation of the data given the information provided by drop-outs. As both BTR and BestFit are algorithms for inferring Boolean models, this result provides further support that Boolean models are robust to the presence of drop-outs in single-cell expression data.
When given an initial network, as in BTR-WI, the BTR algorithm was able to perform very well in locating the true network. While the performance of the BTR algorithm without an initial network (BTR-WO) is comparable with other inference algorithms, BTR-WO scored less well than BTR-WI. This indicates that the greedy hill climbing search strategy implemented in BTR may not be able to traverse the solution space efficiently without any initial information. Taken together, while BTR can be used for reconstructing network models without initial information, BTR performed best when used to train and improve on existing networks that contain a partially true structure. It is also worth noting that BTR produces a dynamic model with a directed underlying static network, in contrast to most other algorithms, such as CLR, that only produce an undirected static network.

(See figure on previous page.) Fig. 3 BSS scoring function is able to calculate distance scores for cyclic networks. a Cyclic networks generated from GeneNetWeaver that are designated as the true cyclic networks. Each node corresponds to a gene. Black edges indicate activation interactions, while red edges indicate inhibition interactions. b Mean distance scores computed using the BSS scoring function for modified networks that are increasingly different from the true network in terms of edges, using zero-inflated synthetic expression data. The modified networks contain from two up to forty edges that differ from the true network. Each data point is the mean distance score of 100 different random modified networks that contain the same number of different edges with respect to the true network. The error bar is the standard error of the mean.

Fig. 4 Summary of BIC and BSS scoring functions. Mean scores have been calculated across all networks (five acyclic networks and five cyclic networks) for the BIC and BSS scoring functions using zero-inflated synthetic expression data. All scores have been standardised for comparison purposes, such that the scores range from 0 to 1.
BTR predicts gene interactions by training haematopoietic Boolean models
We next wanted to apply BTR to biological data to evaluate its utility to biologists. Haematopoiesis research has provided many paradigms for modern biological research, and was one of the first fields to embrace single cell expression profiling [5,36,37]. Moreover, literature-curated Boolean network models have been reported both for blood stem cell maintenance and blood progenitor differentiation [38,39]. The single-cell expression data used here include single-cell qPCR and single-cell RNA-Seq data, both obtained from [10]. The two Boolean models will be referred to as the Bonzanni model [39] (Fig. 6a) and the Krumsiek model [38] (Fig. 6c). Both models had been constructed via manual literature curation by the authors of the original papers. The Bonzanni model aimed to capture haematopoietic stem cell (HSC) self-renewal capacity, while the Krumsiek model describes the differentiation process of the erythro-myeloid lineage in haematopoiesis.
We first trained the Bonzanni model using single-cell RNA-Seq data collected from HSCs. Compared to the original model, the resulting trained Bonzanni model (Fig. 6b) shows the deletion of ten gene interactions and the addition of thirteen gene interactions (Table 1). The state space of the trained Bonzanni model contains 1486 states when simulated using the initial state used in the original study (Fig. 7a). Of note, there are many densely connected transitional states in the state space, which may be related to the complexity of cell fate decision making processes in multipotent progenitor cells. Steady state analysis showed that the steady states of the trained Bonzanni model are almost identical to the steady states of the original Bonzanni model (Fig. 8a), except for the absence of cyclic steady states. The authors suggested that the cyclic steady states in the original Bonzanni model correspond to the self-renewal maintenance loop in HSCs; this loop is not present in our trained model, possibly because the number of cells profiled by single-cell RNA-seq is not enough to sufficiently capture the HSC self-renewal expression signature. We then trained the Krumsiek model using single-cell qPCR data collected from over 450 cells along the erythro-myeloid lineage, which includes common myeloid progenitors, granulocyte-monocyte progenitors and myeloid-erythroid progenitors. To demonstrate that BTR can be used in cases where we may want to extend a current Boolean model by adding more genes to it, we used BTR to train and add two additional genes to the Krumsiek model. The resulting trained Krumsiek model (Fig. 6d) contains three deleted gene interactions and twelve added gene interactions (Table 1) when compared to the original Krumsiek model. For the two additional genes Ldb1 and Lmo2, BTR predicted gene interactions among Ldb1, Lmo2, Fli1, Gata1 and Gata2. Previous studies have shown that genome-wide binding profiles for Lmo2, Gata2 and Fli1 show significant overlaps [40], and that Ldb1 also occupies nearly all of the binding sites of Gata2 [41], consistent with a model where these TFs engage in combinatorial interactions. The state space of the trained Krumsiek model contains 21 states when simulated using the initial state used in the original study (Fig. 7b). The two steady states reachable in this state space may correspond well to cell populations that are primed for the erythrocyte and myeloid lineage divergence. When examining the steady states reachable from all possible initial states, the trained Krumsiek model produces additional steady states when compared with the original model due to the addition of two extra genes (Fig. 8b), which may correspond to intermediate cell types along the erythro-myeloid differentiation pathway.

Fig. 5 Mean F-scores of network inference algorithms using a non zero-inflated synthetic data and b zero-inflated synthetic data. Ten true synthetic networks (five each for acyclic and cyclic networks) were used in the assessment of these network inference algorithms. Plots titled 'Both' show the combined results of acyclic and cyclic network inference. The error bar is the standard error of the mean.
Taken together, these results suggest that BTR has trained both the Bonzanni and Krumsiek models to predict new gene interactions, which give rise to interesting state spaces and steady state properties. Note that the state space of the trained Bonzanni model is substantially larger than the state space of the trained Krumsiek model, due to the denser interactions among genes and a lower proportion of inhibitory edges in the trained Bonzanni model (Additional file 4: Figure S5).
Conclusions
We have developed the BTR model learning algorithm for training asynchronous Boolean models using single-cell expression data. The key component in BTR is a novel Boolean state space (BSS) scoring function, which BTR uses to infer a Boolean model through an optimisation process. We have shown that the new BSS scoring function is capable of giving meaningful scores to networks when compared with the BIC scoring function for Bayesian networks. We then showed that when compared to other network reconstruction algorithms, BTR gave the best result when initial networks were provided. In two case studies, we have demonstrated that BTR is capable of suggesting modifications to existing Boolean models based on information from single-cell qPCR and RNA-Seq data. Finally, we anticipate BTR to be a useful addition to the current toolbox for processing and understanding single-cell expression data, as it provides significant new capabilities for regulatory network modelling in a user-friendly way.
Definitions
A Boolean model B consists of n genes x_1, …, x_n and n update functions f_1, …, f_n : {0, 1}^n → {0, 1}, with each f_i being associated with gene x_i (Fig. 1a). Each gene x_i corresponds to a binary variable representing the expression value of the gene, i.e. x_i ∈ {0, 1}. Gene x_i is a target gene when it acts as a response variable and an input gene when it acts as a predictor variable. Each update function f_i can be evaluated to give a value to a target gene x_i, and is expressed in terms of Boolean logic by specifying the relationships among a subset of the input genes x_1, …, x_n using the Boolean operators AND (∧), OR (∨) and NOT (¬). An update function f_i consists of an activation clause a_i and an inhibition clause b_i, combined in the form

f_i(s_t) = a_i(s_t) ∧ ¬ b_i(s_t).

Each clause is individually expressed in disjunctive normal form, (u_1) ∨ (u_2) ∨ (u_3) ∨ … ∨ (u_n), where u represents a slot which can take either a single input gene x_i or a conjunction of two input genes x_i ∧ x_{i+1}. An illustrative example of an update function f_1(s_t) for a target gene x_1 with an input state s_t is

f_1(s_t) = (x_2 ∨ (x_3 ∧ x_4)) ∧ ¬(x_5).

A few constraints are imposed on the update functions during model learning in BTR. Firstly, the update function allows a conjunction of up to two input genes in each slot u. Secondly, each input gene x_i can only be present once in a single update function, but the same input gene x_i can be present in multiple update functions. Thirdly, a user is able to specify a soft limit on the number of input genes (i.e. in-degree) allowed per update function, where the default in BTR is an in-degree of 6 per gene. Lastly, by default no self-loop is allowed in BTR.
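To make the clause structure concrete, here is a minimal Python sketch of an update function in the form above; the representation, function names and the example clauses are illustrative choices, not the BTR package's own API.

```python
def eval_clause(clause, state):
    """Evaluate a disjunctive-normal-form clause: a list of slots, each slot a
    tuple of input-gene indices joined by AND (at most two genes per slot).
    An empty clause evaluates to False."""
    return any(all(state[i] for i in slot) for slot in clause)

def update_function(activation, inhibition, state):
    """f_i(s_t) = activation_clause(s_t) AND NOT inhibition_clause(s_t)."""
    return int(eval_clause(activation, state) and not eval_clause(inhibition, state))

# The illustrative example from the text: f_1(s) = (x_2 OR (x_3 AND x_4)) AND NOT (x_5)
activation = [(2,), (3, 4)]
inhibition = [(5,)]
state = {2: 1, 3: 1, 4: 0, 5: 0}  # x_2 on, inhibitor x_5 off
print(update_function(activation, inhibition, state))  # -> 1
```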
A model state given by a Boolean model B is represented by a Boolean vector s_t = {x_1t, …, x_nt} at simulation step t. A model state space S represents the set of all model states s_t reachable from an initial model state s_1, i.e. S = {s_1, …, s_t}. S can be obtained by simulating the model B starting from an initial model state s_1 using the asynchronous update scheme. The asynchronous update scheme specifies that at most one gene is updated between two consecutive states (Fig. 1b). Assuming we have a model state s_t which is not a steady state, there will be i (i ≥ 1) genes in s_t such that x_it ≠ f_i(s_t). Therefore at simulation step t + 1, s_{t+1} would have i possible configurations s_{t+1}^{(i)}, where s_{t+1}^{(i)} = {x_1t, …, f_i(s_t), …, x_nt}. This simulation is repeated until it reaches a steady state. By definition, steady states are a set of states whose destination states also belong to the same set. That is, a steady state may be a single model state s_t, or it may consist of a cyclic sequence of model states s_t, …, s_{t+j}.
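The asynchronous simulation lends itself to a breadth-first exploration of the reachable state space. The sketch below assumes update functions are supplied as callables keyed by gene, as in the previous snippet; it illustrates the scheme and is not BTR's own implementation.

```python
from collections import deque

def async_successors(state, update_fns):
    """One asynchronous step: for each gene i with x_i != f_i(state),
    produce the state in which only that gene is updated."""
    succ = []
    for i, f in update_fns.items():
        new_val = f(state)
        if state[i] != new_val:
            s2 = dict(state)
            s2[i] = new_val
            succ.append(s2)
    return succ

def model_state_space(initial, update_fns):
    """All model states reachable from the initial state (the set S)."""
    key = lambda s: tuple(sorted(s.items()))
    seen, queue = {key(initial)}, deque([initial])
    while queue:
        s = queue.popleft()
        for s2 in async_successors(s, update_fns):
            if key(s2) not in seen:
                seen.add(key(s2))
                queue.append(s2)
    return seen

# A state with no asynchronous successors is a point steady state; cyclic
# steady states correspond to terminal strongly connected components.
```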
The single-cell expression data used in this study are each a matrix consisting of n individual genes in the columns and k individual cells in the rows. The expression data are normalised and standardised to give y_kn ∈ [0, 1]. A data state v_k = {y_1, …, y_n} represents the expression state of cell k for the n genes observed in the cell. A data state space V = {v_1, …, v_k} represents the set of all data states observed in an experiment.
BTR model learning
The aim of BTR is to identify a Boolean model B, with genes x_1, …, x_n and update functions f_1, …, f_n, that can produce a model state space which closely resembles an independent single-cell expression data set (i.e. a data state space). Note that model state space and data state space are defined in a similar way, the only difference being that the n genes take continuous values in [0, 1] within a data state, while the n genes take binary values 0 and 1 in a model state. The distance between model and data state spaces is measured by the pairwise distance between pairs of model and data states, as specified by the scoring function (see below). By iteratively modifying an initial Boolean model B_1, the distance between the model and data state spaces can be minimised until a final Boolean model B_f with a smaller distance is obtained. BTR performs model learning using techniques from discrete optimisation. In any optimisation problem, there are two important components, namely a scoring function and a search strategy.
BSS Scoring function in BTR
The scoring function used in BTR is a novel scoring function we developed, termed the Boolean state space (BSS) scoring function. The BSS scoring function g(S, V) is a distance function, which consists of a base distance variable and two penalty variables:

g(S, V) = h(S, V) + λ_1 ε_1 + λ_2 ε_2,

where h(S, V) = base distance, ε_1 and ε_2 = penalty variables, and λ_1 and λ_2 = constants weighting the penalty variables.
The base distance h(S, V) is the average pairwise distance between matched model and data states,

h(S, V) = (1/N_s) Σ_{t=1}^{N_s} d(s_t, v_{m(t)}),

where m(t) is the index of the data state matched to model state s_t. To prevent multiple model states from matching to a single data state, one-to-one matching between model and data states is enforced if the number of data states, N_v, is greater than or equal to the number of model states, N_s, i.e. N_v ≥ N_s. For cases where N_v < N_s, one-to-one matching between model and data states is enforced greedily up until the point where every data state has been assigned a matching model state; non-unique matching then occurs for the remaining model states, each matched to the data state with the minimum distance.
Here d(s_t, v_k) = pairwise distance between model state s_t and data state v_k (0 ≤ d(s_t, v_k) ≤ 1), N_s = number of model states, N_v = number of data states, and n = number of genes.
The distance between model state s_t and data state v_k, d(s_t, v_k), is defined as the sum of the absolute differences between the values of each gene i in model state s_t and data state v_k, normalised by the number of genes n (so that 0 ≤ d ≤ 1):

d(s_t, v_k) = (1/n) Σ_{i=1}^{n} |x_ti − y_ki|,

where x_ti ∈ {0, 1} is the value of gene i in model state s_t and y_ki ∈ [0, 1] is the value of gene i in data state v_k.
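A small NumPy sketch of the distance computation follows. The 1/n normalisation of d and the averaging of h over the N_s model states are inferred from the stated bound 0 ≤ d ≤ 1 and the symbol list above, since the printed equations did not survive extraction; the greedy matching mirrors the description in the text.

```python
import numpy as np

def pairwise_distance(model_states, data_states):
    """d(s_t, v_k) = (1/n) * sum_i |x_ti - y_ki| for every pair of states."""
    S = np.asarray(model_states, dtype=float)  # N_s x n, binary values
    V = np.asarray(data_states, dtype=float)   # N_v x n, values in [0, 1]
    n = S.shape[1]
    return np.abs(S[:, None, :] - V[None, :, :]).sum(axis=2) / n  # N_s x N_v

def base_distance(D):
    """h(S, V): greedy one-to-one matching by ascending distance; once every
    data state is taken (the N_v < N_s case), remaining model states fall
    back to their nearest data state (non-unique matching)."""
    N_s, N_v = D.shape
    pairs = sorted((D[s, v], s, v) for s in range(N_s) for v in range(N_v))
    matched, used_v = {}, set()
    for d, s, v in pairs:
        if s not in matched and v not in used_v:
            matched[s] = v
            used_v.add(v)
    for s in range(N_s):
        if s not in matched:
            matched[s] = int(np.argmin(D[s]))
    return sum(D[s, matched[s]] for s in range(N_s)) / N_s
```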
The two penalty variables, ε_1 and ε_2, in g(S, V) are used to prevent underfitting and overfitting. ε_1 penalises based on the proportions of 0s, p_0, and 1s, p_1, across all genes and all states in a model state space. The concept of ε_1 is that it penalises complexity in Boolean models via their simulated model state spaces. We have shown that as a Boolean model becomes more complex (i.e. as the number of edges increases), both p_0 and p_1 of its model state space become closer to 0.5 (see Additional file 5: Figure S4), making ε_1 a good penalty for model complexity.
ε_1 is derived from the quantity

a = Σ_{i ∈ {0, 1}} (p_i − 0.5)² / 0.5,

which approaches 0 as p_0 and p_1 approach 0.5; taking ε_1 = 1 − a therefore yields a penalty that grows with model complexity. ε_2 penalises based on the number of input genes present in each update function f_i in a Boolean model B, given a specified threshold z_max.
ε_2 is the mean over update functions of a per-function penalty w_i, where w_i is zero when the number of input genes z_i is within the threshold and grows with the excess otherwise (e.g. w_i = max(0, z_i − z_max)), with z_i = the number of input genes in update function f_i and z_max = the maximum number of input genes allowed per update function. The default z_max in BTR is 6, which means that each target gene is encouraged to have no more than 6 input genes.
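The penalty terms can be sketched as follows. The exact printed formulas were garbled, so the form ε_1 = 1 − a, the per-function penalty w_i = max(0, z_i − z_max), the averaging of w_i, and the weighted combination in g are plausible reconstructions consistent with the description above, not the package's verbatim definitions.

```python
import numpy as np

def penalties(model_state_space, in_degrees, z_max=6):
    """eps1 from the proportions of 0s and 1s across the whole model state
    space; eps2 from the in-degree of each update function."""
    S = np.asarray(model_state_space, dtype=float)  # states x genes, binary
    p1 = S.mean()
    p0 = 1.0 - p1
    a = ((p0 - 0.5) ** 2 + (p1 - 0.5) ** 2) / 0.5
    eps1 = 1.0 - a  # grows towards 1 as p0, p1 approach 0.5 (complex models)
    eps2 = float(np.mean([max(0, z - z_max) for z in in_degrees]))
    return eps1, eps2

def bss_score(h, eps1, eps2, lam1=1.0, lam2=1.0):
    """g(S, V) = h(S, V) + lam1 * eps1 + lam2 * eps2 (assumed combination)."""
    return h + lam1 * eps1 + lam2 * eps2
```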
Search strategy in BTR
A good search strategy is required in optimisation to locate the optimal solutions within a high dimensional and complex solution space. The search strategy in BTR is a form of swarming hill climbing, in which multiple optimal solutions are kept at each search step and the search only ends when the score converges for all of the optimal solutions (Fig. 9). In the BTR search algorithm, the search starts from an initial Boolean model and iteratively explores the neighbourhood of the current Boolean model in the solution space by minimal modification. When no initial model is given to BTR, it generates a random initial model whose degree distribution follows a power law with degree exponent γ = 3.
The minimal modification of a Boolean model is performed by adding or removing a gene from a single update function in the Boolean model. The resulting modified model is then evaluated by the BSS scoring function. By repeating this procedure, BTR is able to explore the solution space and eventually arrives at a more optimal Boolean model. Because multiple possible Boolean models can give rise to the exact same simulated state space, BTR usually retains a list of equally optimal Boolean models at the end of the search process. In such cases, a consensus model, whose edges are weighted according to the frequencies of their presence in the list of optimal Boolean models, is generated. Due to the design of the search strategy, it is geared more towards a local search than a global search. Therefore, in line with the results shown in Fig. 5, BTR is best used for iteratively improving a gene network with known biological knowledge using an independent set of single-cell expression data.

Fig. 9 Pseudocode of the search algorithm in BTR
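The search loop can be summarised in a few lines. The pseudocode of Fig. 9 is not reproduced in this text, so the sketch below (function names, swarm width and convergence test included) is a hedged reading of the description above rather than a transcription.

```python
def swarming_hill_climb(initial_model, score_fn, neighbours_fn,
                        width=5, max_steps=1000):
    """Keep several equally optimal models; expand each by minimal
    modifications (add/remove one gene in one update function) and stop
    when no neighbour improves the (minimised) BSS score."""
    swarm, best = [initial_model], score_fn(initial_model)
    for _ in range(max_steps):
        candidates = [(score_fn(m2), m2)
                      for m in swarm for m2 in neighbours_fn(m)]
        if not candidates:
            break
        new_best = min(score for score, _ in candidates)
        if new_best >= best:  # score has converged
            break
        swarm = [m for score, m in candidates if score == new_best][:width]
        best = new_best
    return best, swarm  # ties in the final swarm feed the consensus model
```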
BTR data processing

BTR is capable of handling all common types of expression data, including qPCR and RNA-Seq. Expression data should be processed and normalised before being used in BTR. Within BTR, the expression data is further processed to facilitate score calculation by the BSS scoring function. Firstly, if the input data is qPCR expression data, it is inverted such that a gene with a low expression level has a low value and vice versa. Secondly, the expression values for each gene are scaled to continuous values in the range [0, 1].
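A minimal sketch of this preprocessing, assuming per-gene min-max scaling and a simple max-minus-value inversion for qPCR-style measurements (the text does not specify the exact transforms):

```python
import numpy as np

def preprocess(expr, is_qpcr=False):
    """Columns are genes, rows are cells. Invert qPCR-style values so that
    high expression maps to high values, then scale each gene to [0, 1]."""
    X = np.asarray(expr, dtype=float)
    if is_qpcr:
        X = X.max(axis=0) - X
    mins, maxs = X.min(axis=0), X.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)  # guard constant genes
    return (X - mins) / span
```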
Calculation of F-score

The F-score, which is the harmonic mean of precision and recall, summarises both quantities concisely [35] and is often used to assess the performance of network inference algorithms. Precision denotes the proportion of edges that are truly present among all edges classified as present, while recall denotes the proportion of truly present edges that are correctly classified as present [42]. The calculations were performed on directed adjacency matrices.
Precision is defined as

Precision = TP / (TP + FP),

where TP = true positives and FP = false positives. Recall is defined as

Recall = TP / (TP + FN),

where FN = false negatives. The F-score is defined as

F-score = 2 × Precision × Recall / (Precision + Recall).
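The edge-level computation on directed adjacency matrices is straightforward; the snippet below follows the standard definitions used above (the helper name is ours):

```python
import numpy as np

def edge_f_score(true_adj, pred_adj):
    """Precision, recall and F-score on edge presence/absence in directed
    adjacency matrices, ignoring edge types as described in the text."""
    t = np.asarray(true_adj, dtype=bool)
    p = np.asarray(pred_adj, dtype=bool)
    tp = int(np.sum(t & p))
    fp = int(np.sum(~t & p))
    fn = int(np.sum(t & ~p))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f = (2 * precision * recall / (precision + recall)
         if (precision + recall) else 0.0)
    return f, precision, recall
```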
Synthetic data
The synthetic data used for comparing scoring functions and network inference algorithms consist of true networks, expression data and lists of modified networks. The true networks and expression data were generated using GeneNetWeaver version 3.13 [28]. The true networks contain 10 genes each and were extracted from the gene network of yeast. Each true network generated by GeneNetWeaver was then categorised as acyclic or cyclic. A total of 5 acyclic and 5 cyclic true networks were used in this study. The expression data were generated using ordinary and stochastic differential equations based on the true networks. A single time-series expression data set with 1000 observations was generated per true network, and the expression data were simulated under steady-state wild-type conditions. A coefficient of 0.05 was used for the noise term in the stochastic differential equations. The synthetic expression data as generated by GeneNetWeaver are used as the non zero-inflated data. In addition, the synthetic expression data are converted into zero-inflated data, to simulate drop-outs in single-cell expression data, by calculating the probability of a reading being a drop-out (i.e. a zero value) based on its expression level. The probability of a reading being a drop-out, p_d, is modelled as a decreasing function of the expression level y of a particular gene, controlled by a constant c (in this study, c = 6). The lists of modified networks were generated in R using the bnlearn package [43] for Bayesian networks and the BTR package for Boolean models. The modified networks were generated by modifying the number of edges that differ from the true network, ranging from 2 up to 40 differing edges. The modified Bayesian networks and the modified Boolean models were generated separately due to different underlying structural constraints imposed by each framework. In the Bayesian framework, all networks must be directed acyclic graphs, while Boolean models have no such restriction. Conversely, Boolean models require explicit specification of activation and inhibition edges, while Bayesian networks handle activation and inhibition implicitly without modifying the edges. Although the generation of modified Bayesian networks and Boolean models was done separately, and therefore they are not identical, all modified networks contain the same number of differing edges (2 to 40 edges) with respect to the true network. Note that the differences in edges for acyclic modified networks are not cumulative, due to difficulties in generating a directed acyclic graph with cumulative edge differences. The differences in edges for cyclic modified networks are also not cumulative, to maintain consistency with the acyclic modified networks.
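The drop-out conversion can be sketched as below. The printed form of p_d was lost; a simple exponential decay p_d = exp(−c·y) is assumed here because it satisfies the stated behaviour (high drop-out probability at low expression, constant c = 6), but the original may have used a different decaying function.

```python
import numpy as np

def zero_inflate(expr, c=6.0, seed=0):
    """Convert expression data scaled to [0, 1] into zero-inflated data:
    each reading y drops out to zero with probability p_d = exp(-c * y)."""
    rng = np.random.default_rng(seed)
    Y = np.asarray(expr, dtype=float)
    p_drop = np.exp(-c * Y)
    return np.where(rng.random(Y.shape) < p_drop, 0.0, Y)
```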
For synthetic data, the initial state used for the simulation of Boolean models is given by the expression values at time t = 0.
Haematopoietic data
Two Boolean models of haematopoiesis were used as initial models for model learning in this study, namely the Bonzanni model [39] and the Krumsiek model [38]. | 9,900.8 | 2016-01-01T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Fasudil, a Clinically Used ROCK Inhibitor, Stabilizes Rod Photoreceptor Synapses after Retinal Detachment
Purpose Retinal detachment disrupts the rod-bipolar synapse in the outer plexiform layer by retraction of rod axons. We showed that breakage is due to RhoA activation whereas inhibition of Rho kinase (ROCK), using Y27632, reduces synaptic damage. We test whether the ROCK inhibitor fasudil, used for other clinical applications, can prevent synaptic injury after detachment. Methods Detachments were made in pigs by subretinal injection of balanced salt solution (BSS) or fasudil (1, 10 mM). In some animals, fasudil was injected intravitreally after BSS-induced detachment. After 2 to 4 hours, retinae were fixed for immunocytochemistry and confocal microscopy. Axon retraction was quantified by imaging synaptic vesicle label in the outer nuclear layer. Apoptosis was analyzed using propidium iodide staining. For biochemical analysis by Western blotting, retinal explants, detached from retinal pigmented epithelium, were cultured for 2 hours. Results Subretinal injection of fasudil (10 mM) reduced retraction of rod spherules by 51.3% compared to control detachments (n = 3 pigs, P = 0.002). Intravitreal injection of 10 mM fasudil, a more clinically feasible route of administration, also reduced retraction (28.7%, n = 5, P < 0.05). Controls had no photoreceptor degeneration at 2 hours, but by 4 hours apoptosis was evident. Fasudil 10 mM reduced pyknotic nuclei by 55.7% (n = 4, P < 0.001). Phosphorylation of cofilin and myosin light chain, downstream effectors of ROCK, was decreased with 30 μM fasudil (n = 8–10 explants, P < 0.05). Conclusions Inhibition of ROCK signaling with fasudil reduced photoreceptor degeneration and preserved the rod-bipolar synapse after retinal detachment. Translational Relevance These results support the possibility, previously tested with Y27632, that ROCK inhibition may attenuate synaptic damage in iatrogenic detachments.
Introduction
Detachment is well-known to affect synapses in the outer plexiform layer (OPL). 1,2 Synaptic injury begins with retraction of the rod presynaptic terminals towards their cell bodies. Axonal retraction results in the disjunction of the first synapse in the visual pathway as the rod presynaptic terminal disconnects from postsynaptic bipolar dendrites. 3 Cone terminals also are disrupted; they lose their synaptic invaginations and normal connections with bipolar cells. 2,4 The retraction of rod terminals is followed after a short period by extension of bipolar dendrites into the outer nuclear layer (ONL) and sprouting from horizontal cells. 2 Horizontal cell processes sometimes extend well into the subretinal space. Although this pathology originally was described in animal models, the same features, rod axon retraction and bipolar and horizontal cell sprouting, have been reported in humans after detachment. 4,5 Reattachment allows the photoreceptor outer segments (OS) to regrow. 6 However, it does not restore retinal synaptic structure completely. 7 In fact, additional anomalies occur, including inappropriate photoreceptor sprouting from rod terminals into the inner nuclear layer (INL). Approximately a quarter to one-half of patients with either macula-on or macula-off detachments, respectively, do not recover visual function to levels equivalent to the fellow eye, even with anatomically successful surgery. [8][9][10][11] It has been suggested that the lack of visual recovery after reattachment can be attributed in part to continued disruption of synaptic connectivity. 4,12 Thus, preservation of synaptic structure in the injured retina may have significant therapeutic benefit.
We have focused on the synaptic changes in rod terminals, since they are morphologically striking and relatively easy to monitor. Examining first isolated salamander photoreceptors in culture, then retinal explants from adult pigs, and finally iatrogenically produced detachments in living pigs, we demonstrated that activation of the RhoA signaling pathway leads to axonal retraction by rod cells (Fig. 1A). [13][14][15] We hypothesized that ROCK inhibition could reduce axonal retraction. Initially, we used Y27632, an experimental ROCK inhibitor that binds competitively to the ATP binding site of ROCK and, thus, prevents kinase activity. It inhibits both isoforms of ROCK, I and II. When applied to isolated photoreceptors, to retinal explants, or to the subretinal space of detached retinae in vivo, Y27632 consistently reduced the amount of rod axon retraction. [13][14][15] Based on these findings, we sought to examine the effectiveness of a clinically approved ROCK inhibitor, fasudil, that also inhibits ROCK I and II by binding to the ATP binding site. 16 Our initial experiments in the living pig demonstrated that synaptic change occurs very rapidly after injury. Within the first 2 hours after detachment, axonal retraction has begun. There was no evidence of bipolar dendritic sprouting at this time point, demonstrating that rod-bipolar connections were breaking quickly. 15 Therefore, in this report, we have chosen to assess early changes, that is, at 2 hours after the detachment injury.
Animals
Pig eyes are similar in size and vascular anatomy to the human eye, and porcine retina contains rod and cone cells. [17][18][19] Although there is no fovea in the pig, there is an area centralis that is rich in cones. Thus, experiments on adult pigs are useful for translational work.
Female Yorkshire pigs, 3 months old and weighing 30 kg, were obtained from Animal Biotech Industries (Danboro, PA). Pigs were kept on a 12-hour light/12-hour dark cycle for at least 2 days before surgery in an Association for Accreditation and Assessment of Laboratory Animal Care (AAALAC)-accredited pathogen-free facility. The animals were subject to overnight fasting and ad lib access to water before surgery. A total of 16 animals (32 eyes) were used.
Pig eyes used to create retinal explants were obtained from a local abattoir through Animal Parts (Scotch Plains, NJ). The male/female Yorkshire pigs, 6 months old and weighing 60 to 75 kg, were sacrificed mid-day; the eyes were kept on ice and delivered to the lab within 2 hours. A total of 9 eyes were used.
Experimental procedures and methods of euthanasia were approved by the New Jersey Medical School Institutional Animal Care and Use Committee.
Retinal Detachment Procedures and Drug Administration
For retinal detachments in vivo, animals were premedicated with atropine (0.02 mg/kg; VetUS, Henry Schein, Dublin, OH) and anesthetized with ketamine (20 mg/kg; Mylan Institutional LLC, Galway, Ireland) and xylazine (2.2 mg/kg; LLOYD Lab., Shenandoah, IA) injected intramuscularly, and then catheterized and intubated. Animals were administered pre- and postoperative intravenous injections of buprenorphine (0.01-0.05 mg/kg; Reckitt Benckiser HealthCare, Hull, England) and enrofloxacin (10 mg/kg; Bayer HealthCare, Shawnee, KS). To maintain anesthesia, the animals were given 1% to 3.0% isoflurane in oxygen using a ventilator. Lactated Ringer's solution was infused intravenously at a rate of 8 mL/kg/h. Under anesthesia, pupils were dilated with topical application of 1% tropicamide (Bausch & Lomb, Tampa, FL) and 2.5% phenylephrine hydrochloride (Paragon Bioteck, Portland, OR). A standard 3-port vitrectomy was done using 20-gauge instrumentation. The posterior hyaloid was elevated off the area centralis using active suction, and a core vitrectomy was completed. During and after vitrectomy, the vitreous cavity of the eye was perfused with a mammalian balanced salt solution (BSS; Alcon, Fort Worth, TX) containing 1 µg/mL epinephrine (Henry Schein, Dublin, OH). A bent 33-gauge metal cannula with a 50 to 100 µm tip was used to slowly inject BSS or fasudil (Selleckchem, Boston, MA; dissolved in BSS) subretinally to create a retinal detachment (~10-15 mm in diameter) in the inferior nasal quadrant (Figs. 1B, 1C). For intravitreal administration, 150 µL of 200 mM fasudil dissolved in BSS was injected with a 30-gauge needle into the vitreous cavity (entering ~3 mm posterior to the limbus) either immediately after or 2 hours after retinal detachment surgery. After the eye was treated with drug for 2 hours, the animal was sacrificed for enucleation. Eyes were kept in ice-cold Dulbecco's Modified Eagle's Medium (DMEM) containing 4.5 g/L glucose, L-glutamine and sodium pyruvate (10-013 CV; Cellgro, Mediatech, Manassas, VA) until opened to collect samples, usually approximately 20 minutes later. Once the eyes were opened, the anterior segment and any remaining vitreous humor were removed carefully, and a few drops of DMEM were added to keep the surface of the neural retina moist. The neural retinal tissues were collected as diagrammed in Figure 1 after fixation in 4% paraformaldehyde (EMS, Hatfield, PA) in 0.125 M phosphate buffer (PB; pH 7.4) for morphological analysis.
Western Blotting
Explants of detached retina were created as described previously. 15 Briefly, after the surrounding orbital tissue was removed, the eyes were washed twice with DMEM. The anterior segment and vitreous body were removed carefully, and a few drops of DMEM were added to keep the surface of the neural retina moist. Buttons of retinal tissue (6 mm) were created using a trephine and detached from the underlying retinal pigment epithelium by injecting a few drops of DMEM along the cut edge. The detached neural retinae were removed gently and cultured in Neurobasal-A media (10888-022; Gibco, Life Technologies, Grand Island, NY) supplemented with 1% GlutaMAX-I (Gibco), 2% B-27 Supplement (Gibco), and 100 U/mL penicillin and 100 µg/mL streptomycin (Gibco) at 37°C for 2 hours, and then snap-frozen with dry ice plus ethanol and stored at −80°C for further use.
Immunohistochemistry
Retinal samples were fixed overnight at 4°C and then immersed in 30% sucrose for an additional night at 4°C, embedded and frozen in OCT compound (Sakura Finetek, Torrance, CA), and cut into 15-µm thick sections using a cryostat as described previously. 15 Sections were immunolabeled for SV2 (Developmental Studies Hybridoma Bank, Iowa City, IA) with secondary antibodies conjugated to Alexa Fluor 488 (Life Technologies, Norwalk, CT), followed by propidium iodide (PI; 1 µg/mL; Sigma-Aldrich Corp., St. Louis, MO) to stain nuclei. Labeled sections were covered with Fluoromount-G medium (SouthernBiotech, Birmingham, AL) and preserved under coverslips sealed with nail polish. Sections were examined with a confocal microscope (LSM510; Carl Zeiss Microscopy, Jena, Germany) by scanning 1.0-µm optical sections with a ×63 oil immersion objective. Control sections were processed simultaneously with experimental sections but without primary antibodies.
Quantification of Axonal Retraction
All data were collected by researchers masked to the sample identifications. Brightness and contrast were set to obtain unsaturated images. Laser power and scan rate were unchanged throughout a single experiment. Enhancements in brightness and contrast were performed (Photoshop 7.0; Adobe, Mountain View, CA) only for presentation purposes. SV2 immunolabeling was analyzed as described previously. 15 Briefly, a binary mask was created for each image and the ONL was outlined; all labeled pixels in the ONL were counted using ImageJ (v1.45s; NIH). The measurements are reported as pixels per micrometer of the ONL length. Data were collected from two to four sections per specimen examining at least three different areas of each section.
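The per-image measurement reduces to counting mask pixels; a minimal sketch follows (the array names, and the assumption that the ONL outline and its length come from manual tracing, are ours, not the authors' ImageJ macro):

```python
import numpy as np

def sv2_pixels_per_um(sv2_mask, onl_mask, onl_length_um):
    """SV2-labeled pixels per micrometer of ONL length for one image.
    sv2_mask: boolean array from the binarised SV2 channel.
    onl_mask: boolean array outlining the ONL in the same image."""
    labeled_in_onl = int(np.sum(sv2_mask & onl_mask))
    return labeled_in_onl / onl_length_um
```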
Statistical Analysis
All data were tested with a paired Student's t-test. Use of the paired t-test was based on the experimental design. For images, data from the same region of right and left eyes of the same animal or from detached and attached regions of the same retina were paired. The use of the paired t-test assumed linearity of the data and a normal distribution of the paired differences. For Western blots, explants from the same eye were compared. The fixed normalization point technique was used; control data were set at 100%. 20 For data derived from examination of tissue sections, no normalization was done. Statistical analysis was performed with Sigma Plot (version 11, Chicago, IL) or GraphPad Prism (v5.0, La Jolla, CA). The graphics were produced using GraphPad Prism. Data are expressed as mean ± SD. We set α (the type I error rate) at 0.05.
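For reference, the paired comparison amounts to a single SciPy call; the values below are hypothetical placeholders, not data from this study.

```python
from scipy import stats

# One value per animal: control (BSS) eye vs fasudil-treated eye,
# in SV2-labeled pixels per um of ONL length (hypothetical numbers).
control = [60.1, 62.3, 59.7]
treated = [28.9, 31.0, 27.7]

t, p = stats.ttest_rel(control, treated)
print(f"paired t = {t:.2f}, P = {p:.4f}")
```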
Subretinally Administered Fasudil Reduced Rod Photoreceptor Axonal Retraction
Normally, SV2-immunolabeling is observed only in the inner segments (IS) of photoreceptors and the synapses in the OPL and inner plexiform layers (IPL), but after retinal detachment, SV2-labeling spreads into the ONL (Figs. 2A, 2B). Label in the ONL appears in rod cell somata or as discrete puncta similar in size to an individual rod spherule. Label in the ONL is due to retraction of the rod axon terminal and axon. 2,15 Although cone synapses are affected by detachment, their axon terminals do not retract.
In the past, ROCK inhibition with Y27632 reduced the amount of retraction observed in the ONL by rod photoreceptors in vitro and in vivo. [13][14][15] The question we addressed in the present experiments is whether fasudil also can reduce synaptic disruption. For these experiments, we used a live pig retinal detachment model as described in our previous study. 15 BSS was used to create a retinal detachment in one eye as a control, and 1 or 10 mM fasudil dissolved in BSS was used to create a retinal detachment in the fellow eye in the corresponding nasal inferior retinal quadrant (Figs. 1B, 1C). Doses were based on previous success with 1 and 10 mM Y27632. The eyes were removed and fixed after 2 hours. SV2-labeling in the different retinal areas (BC and BD, attached and detached areas, respectively, in the eye using BSS for detachment; FC and FD, attached and detached areas, respectively, in the eye using fasudil for detachment) were compared (Fig. 1).
With 1 mM fasudil treatment, we detected no difference in axonal retraction in the treated detached area (FD) compared to the untreated detached area (BD). Additionally, there was no difference between detached and attached retinal areas within each eye, treated and control. Thus, 1 mM fasudil had no effect on retraction. However, as reported previously, 15 retraction occurred quickly, that is, by 2 hours after detachment, and was present where the detachment occurred as well as in regions several millimeters away from the injury.
When the concentration of fasudil was increased to 10 mM, axon retraction was reduced. The number of SV2-labeled pixels per µm of ONL length was decreased significantly in the treated detached retina (FD), by 51.3%, compared to the untreated detached retina (BD; Fig. 3). Axonal retraction was decreased by 24.7% in the attached retina of the treated eye (FC) compared to the corresponding area, BC, in the control eye, but the reduction was not statistically significant. Within a single eye, the amount of SV2 labeling in the attached versus detached areas (BD versus BC and FD versus FC) also was not different; although area BD had SV2-labeling in the ONL that was 40.5% higher than area BC, this was not statistically significant. Thus, a subretinal injection of 10 mM fasudil primarily reduced axon retraction in the detached retina, where the reduction was profound.
Fasudil Administered Intravitreally Reduced Rod Photoreceptor Axonal Retraction
Fig. 3 Comparison of SV2-labeled pixels/µm ONL length in different retinal areas. There was a significant reduction of SV2-labeled pixels, by 51.3%, in FD (29.2 ± 2.7 pixels/µm ONL) compared to BD (60.7 ± 2.7 pixels/µm ONL), indicating a reduction of axon retraction (**P = 0.002, n = 3 animals, 300 images, ± SD). There was no significant difference between BC and BD in SV2-labeled pixels.

Clinically, intravitreal injection of drugs is more straightforward than subretinal injection, which requires specialized equipment and may cause other complications. Therefore, we also applied fasudil by intravitreal injection to test for reduction of photoreceptor axonal retraction after retinal detachment. The volume of the eye was determined by filling the posterior eyecup with liquid after removal of the anterior segment and vitreous as shown in Figure 1B. The volume of the eye cup was approximately 3 mL for a 30 kg Yorkshire pig. To achieve an effective dose of 10 mM, 150 µL of 200 mM fasudil was injected intravitreally using a 30-gauge needle approximately 3 mm posterior to the limbus immediately after retinal detachment. Two hours after treatment, the pig was enucleated and the retinae were prepared for histology. Because the intravitreal injections presumably allowed drug to reach detached and attached retinae equally, we combined the data from these two regions (BC, 24.3 ± 9.6; BD, 25.2 ± 9.9; FC, 18.7 ± 8.6; FD, 16.5 ± 6.4; combined BSS, 24.7 ± 5.6; combined fasudil, 17.6 ± 4.2; all in pixels/µm of ONL length). The number of SV2-labeled pixels/µm of ONL length in the detached and attached areas of the treated eye (fasudil) was significantly less, by 28.7% (P = 0.04, n = 5 animals), than the average value in the control (BSS) eye (Fig. 4). The reduction was smaller than that for the subretinal injection; however, it remained statistically significant when tested with the nonparametric Wilcoxon signed-rank test (P = 0.03). The results indicated that immediate treatment with fasudil through intravitreal injection reduces axon retraction after retinal detachment.
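The 10 mM effective dose follows from simple dilution of the injected bolus into the measured eye-cup volume (a back-of-the-envelope check that ignores clearance and incomplete mixing):

```latex
C_{\mathrm{final}} \approx \frac{C_{\mathrm{inj}} \, V_{\mathrm{inj}}}{V_{\mathrm{inj}} + V_{\mathrm{eye}}}
  = \frac{200\ \mathrm{mM} \times 0.15\ \mathrm{mL}}{0.15\ \mathrm{mL} + 3\ \mathrm{mL}}
  \approx 9.5\ \mathrm{mM} \approx 10\ \mathrm{mM}
```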
Fasudil Reduced Phosphorylation of Cofilin and MLC in Retinal Explants
We have shown that the RhoA signaling pathway is activated after retinal detachment in vitro and in vivo; [13][14][15]21 ROCK phosphorylates MLC directly and stimulates its downstream effector LIM kinase (LIMK), which in turn phosphorylates cofilin (Fig. 1A). To test whether fasudil was acting by inhibiting ROCK activity, we assayed the phosphorylation of cofilin and MLC in vitro, in pig retinal explants detached from the retinal pigmented epithelium, after a 2-hour treatment with 30 µM fasudil. Western blot analysis showed that the ratio of p-cofilin/total cofilin was significantly reduced, by 25.8% (P < 0.05, n = 5 eyes, 10 explants), in fasudil-treated samples compared to controls (Figs. 5A, 5C), while the amount of total cofilin did not change. For MLC, the ratio of p-MLC/MLC was significantly reduced by 23.2% in the fasudil-treated group compared to control, while total MLC was elevated, but not significantly (Figs. 5B, 5D). These results indicate that fasudil inhibited the ROCK signaling pathway.
Fasudil Reduced Rod Cell Death after Retinal Detachment in Delayed Treatment
To test whether delayed treatment with fasudil can prevent axonal retraction, 150 µL of 200 mM fasudil (10 mM final concentration) was injected intravitreally 2 hours after retinal detachments were made by subretinal injection of BSS. The animals remained under anesthesia for another 2 hours before they were sacrificed for morphologic analysis. Analysis of axonal retraction showed that there was no significant difference between the fasudil-treated group (24.9 pixels/µm ONL length) and the control group (20.4 pixels/µm ONL length; n = 4). However, we discovered that in the retinae detached for 4 hours, unlike retinae detached for only 2 hours, 15 there were nuclei stained densely by PI, and therefore defined as pyknotic, in the ONL in regions occupied by rods (Figs. 6A, 6B). Nuclear condensation is a hallmark of apoptosis, and PI staining has been used to identify apoptosis in retinal detachment. 22 With 10 mM fasudil treatment, the number of pyknotic nuclei was reduced substantially, by approximately 55.7%, at 4 hours (P < 0.001, Fig. 6C). This observation suggested that fasudil was able to reduce rod cell degeneration.
Discussion
In the normal retina there is virtually no synaptic membrane protein immunolabel in the ONL. However, only 2 hours after detachment, as reported previously, 15 synaptic protein label, and thus axon retraction, appeared in the ONL. In addition, retraction occurred in detached retinae and in tissue several millimeters away. The presence of axon retraction correlates with increases in activated RhoA, which was reported in the detached retina but also in retina outside the detachment. 15 Thus, these findings confirm our previous analysis that the injury response to retinal detachment is not confined to the area of detachment but affects large areas of adjacent attached retina within hours.
Fasudil proved to be effective at reducing axon retraction and rod cell death in the short-term experimental rhegmatogenous detachments used here in the adult pig. Subretinal fasudil application compared favorably with the experimental ROCK inhibitor Y27632, which our group had tested previously. However, there were some differences. Although both drugs were used successfully at 10 mM (43.7% reduction in synaptic retraction for Y27632 versus 51.3% for fasudil), Y27632 also reduced axon retraction at a 1 mM concentration, by 34.5%. 15 In addition, Y27632 at the 10 mM concentration reduced axon retraction in both the detached retina and the distant attached retina. These data suggested that Y27632 may be more efficacious. The K_i values of the two drugs used here are similar (140 and 330 nM for Y27632 and fasudil, respectively), 23 but their half-lives may be different. Nothing is yet known about the half-lives of ROCK inhibitors placed intraocularly, but 10 mg/kg Y27632, when injected intraperitoneally, can cross the blood-brain barrier and has a half-life of 60 to 90 minutes in mouse brain, 24 whereas intravenously injected fasudil is reported to have a plasma half-life of approximately 40 to 45 minutes in rat and human, respectively. [25][26][27] A longer half-life for Y27632 would allow time for diffusion of active inhibitor away from the detachment and thereby account for its broader effects. It is known that the half-lives of drugs can be extended by encapsulation in liposomes, and this strategy has been used successfully for intravitreal administration of fasudil. 28 Thus, in the future, liposomal preparations of fasudil will be tested.
The relatively high concentration of fasudil needed, 10 mM, to significantly reduce axon retraction suggests that, in addition to a short systemic half-life, fasudil may be cleared rapidly from the intraocular compartment. However, it also raises the concern that fasudil may be acting on secondary targets, such as protein kinase A (PKA), PKG, PKC, and MLC kinase (MLCK; K_i values of 1.6, 1.6, 3.3, and 36 µM, respectively), 29 in addition to ROCK I and II. We confirmed that fasudil was working on the RhoA pathway using retinal explants and testing for levels of phosphorylated cofilin and MLC. The fact that phosphorylation is reduced for both proteins while total protein levels were not significantly altered confirms that fasudil is modulating Rho kinase in the RhoA pathway. However, it does not rule out additional secondary effects. In addition to inhibition of secondary kinases, a recent report describes interactions of fasudil with K_v7.4 and K_v7.4/7.5 channels, resulting in increased M-type K+ currents in vascular tissue. 30 Although most neurons do not contain these particular channels, rod and cone cells have been shown to contain message and protein for K_v7.4 and K_v7.5. 31,32 The function of these proteins, possibly occurring as channels, in photoreceptors currently is unknown.
Our report included, for the first time to our knowledge, an examination of axon retraction after an intravitreal injection of a ROCK inhibitor. The reduction of retraction after the intravitreal injection was smaller, at 28.7%, than after a subretinal injection, at 51.3%; however, the reduction was significant. Therefore, fasudil probably can move through the retina to inhibit rod axon terminals. Although fasudil originally was used for cerebral vasospasm, 33 intravitreal fasudil already has been applied to patients with diabetic macular edema and optic nerve damage to successfully interrupt vascular pathology. 34,35 Intravitreal injections of fasudil (10 µM) in patients showed no evidence of intraocular toxicity. Our results, using higher doses, also showed no toxic responses over 2 hours, and, indeed, intravitreal injections after 2 hours of detachment in the pig dramatically decreased the number of pyknotic nuclei in the outer nuclear layer in the region of rod cell somata and, thus, had a protective effect on rod photoreceptors for the short term. This result is perhaps not surprising, as activated RhoA has been shown to be an inducer of apoptosis, 36 and application of ROCK inhibitors has been reported to reduce apoptosis in the retina. [37][38][39][40][41][42] In the latter study on the RCS rat, a single 10 to 50 mM dose of Y27632 injected intravitreally (1-5 mM estimated final concentration) had the optimal antiapoptotic effect on photoreceptors, a protocol similar to our report. Thus, intravitreal fasudil seems well tolerated by the eye, although long-term studies in our model system remain to be done. Because intravitreal injection has a relatively low incidence of complications and is easy to administer in an outpatient environment, it may, in the future, be the preferred route for drug delivery in the setting of retinal detachment.
An unexpected result occurred with fasudil, however, for intravitreal injections given after 2 hours of a 4-hour detachment. We had hypothesized that a delayed treatment would decrease retraction of those axons that initiated retraction later. We had demonstrated that Y27632 can reduce rod axon retraction when applied 6 hours after a detachment in an in vitro model of porcine retinal detachment. 13 However, there was no effect on retraction with a delayed fasudil treatment in vivo, even though the drug was effective at reducing rod cell death. At present the explanation is not clear. The timeline of axon retraction appears slower in vitro 13 than in vivo. 15 Possibly, in vivo the initial fast rate of retraction has slowed after 2 hours, so an effect is difficult to detect with a short-lived drug. In the cat model of rhegmatogenous detachment in vivo, synaptic protein label in the ONL is quite abundant in detachments from 1 to 28 days old, 2 suggesting that retraction continues for at least several days, but the rate of axon retraction is unknown. We have preliminary data that RhoA activation is higher in detached retina than in uninjured retina at 10 and 24 hours, but reduced from its peak at 2 hours (Wang J, Zarbin M, Sugino I, Whitehead I, and Townes-Anderson E. Personal observations, 2016). Changes in RhoA activation over time also have been reported in other injured neural tissue. 43 Alternatively, downstream kinases, such as LIMK, may be active at 2 hours, and their activity would not be inhibited by fasudil (Fig. 1A). We have shown previously that LIMK contributes to axon retraction in pig retina 21 and that using inhibitors of both ROCK and LIMK has an increased inhibitory effect on axon retraction. It will be instructive to apply an inhibitor of LIMK in a delayed treatment scenario after detachment. Future experiments should be able to resolve these questions.
Our results suggested that there are several possible retinal applications of fasudil. Fasudil could be useful for iatrogenic retinal detachments, such as detachments created for subretinal delivery of stem cells, viral vectors, or visual prostheses. Since the timing of the detachment is predetermined in these cases, it should be possible to inject fasudil into the subretinal or intravitreal space either before or during creation of the detachment. The drug would protect the OPL from damage. The potential value of stabilizing the outer retinal circuitry during iatrogenic detachments is supported by a recent report that described changes in the outer synaptic layers of an enucleated human eye after subretinal placement of a visual prosthesis (Chen J, et al. IOVS 2016;57:ARVO E-Abstract 3732). The disruption reported was likely due to axon retraction. In addition, a recent report described retinal recovery after iatrogenic macular detachment for gene therapy in five patients with choroideremia. 44 Although OCT showed full reattachment, visual recovery was not uniform among the patients. Sensitivity and color vision did not return to baseline after 1 month for approximately a third of the treated eyes. Again, it is possible that OPL damage had occurred. Although we have yet to demonstrate directly that morphologic disruption of the OPL leads to functional problems, anatomy and physiology are closely linked. Therefore, we propose that the success of subretinal procedures may be enhanced by treatment with ROCK inhibitors. Intravitreal fasudil may be additionally useful in retinal degenerations to reduce cell death and in degenerations that exhibit rod axon retraction pathology (RCS rat 45 ; nob2 mouse, a model of congenital stationary night blindness 46 ; retinoschisis knockout mouse 47 ; rat model of oxygen-induced retinopathy 48 ). The retraction observed in these degenerations is morphologically the same as that seen in iatrogenic retinal detachment. If, like detachment, it is caused by RhoA activation, ROCK inhibition may help preserve outer synaptic connections.
There are several avenues that could be pursued to better understand the breadth of possible applications of ROCK inhibitors to retinal injury and disease. Fortuitously, a number of ROCK inhibitors are being tested for ocular use in clinical trials. 49 Their binding constants, half-lives, off-target effects, solubility, and so forth vary. If these drugs prove useful in patient treatment, they can be repurposed and tested as agents for retinal protection. | 6,468 | 2017-05-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Nanofabricated upconversion nanoparticles for photodynamic therapy
We present a novel process for the production of three-layer Composite Nanoparticles (CNPs) in the size range 100-300 nm with an upconverting phosphor interior, a coating of porphyrin photosensitizer, and a biocompatible PEG outer layer to prevent clearance by the reticuloendothelial system. We show that these CNPs produce millimolar amounts of singlet oxygen at NIR intensities far less than other two-photon techniques. ©2008 Optical Society of America OCIS codes: (170.0170) Medical Optics and Biotechnology References and links 1. S. Heer, K. Kompe, H. U. Gudel, and M. Haase, "Highly efficient multicolour upconversion emission in transparent colloids of lanthanide-doped NaYF4 nanocrystals," Adv. Mat. 16, 2102-+ (2004). 2. K. Kuningas, T. Rantanen, U. Karhunen, T. Lovgren, and T. Soukka, "Simultaneous use of time-resolved fluorescence and anti-stokes photoluminescence in a bioaffinity assay," Anal. Chem. 77, 2826-2834 (2005). 3. K. van de Rijke, H. Zijlmans, S. V. Li, T. A. K. Raap, R. S. Niedbala, and H. J. Tanke, "Up-converting phosphor reporters for nucleic acid microarrays," Nature Biotech. 19, 273-276 (2001). 4. M. Zuiderwijk, H. J. Tanke, R. S. Niedbala, and P. Corstjens, "An amplification-free hybridization-based DNA assay to detect Streptococcus pneumoniae utilizing the up-conversion phosphor technology," Clin. Biochem. 36, 401-403 (2003). 5. S. F. Lim, R. Riehn, W. S. Ryu, N. Khanarian, C. K. Tung, D. Tank, and R. H. Austin, "In vivo and scanning electron microscopy imaging of upconverting nanophosphors in Caenorhabditis elegans," Nano Lett. 6-2 (2006). 6. V. P. Torchilin, and V. S. Trubetskoy, "Which Polymers Can Make Nanoparticulate Drug Carriers LongCirculating," Adv. Drug Del. Rev. 16, 141-155 (1995). 7. J. Shan and Y. G. Ju, "Synthesis, characterization and SiO2 coating of NaF4Y:Er3+ upconversion nanophosphors using high temperature ligands," Adv. Mat. Submitted (2007). 8. B. K. Johnson and R. K. Prud'homme, "Flash NanoPrecipitation of organic actives and block copolymers using a confined impinging jets mixer," Australian J. Chem. 56, 1021-1024 (2003). 9. F. Auzel, "Upconversion and anti-stokes processes with f and d ions in solids," Chem. Rev. 104, 139-173 (2004). 10. M. J. Moreno, E. Monson, R. G. Reddy, A. Rehemtulla, B. D. Ross, M. Philbert, R. J. Schneider, and R. Kopelman, "Production of singlet oxygen by Ru(dpp(SO3)(2))(3) incorporated in polyacrylamide PEBBLES," Sens. Actuat. B-Chem. 90, 82-89 (2003). 11. B. A. Lindig, M. A. J. Rodgers, and A. P. Schaap, "Determination of the Lifetime of Singlet Oxygen in D2o Using 9,10-Anthracenedipropionic Acid, a Water-Soluble Probe," J. Am. Chem. Soc. 102, 5590-5593 (1980). 12. D. K. Chatterjee and Z. Yong, "Upconverting nanoparticles as nanotransducers for photodynamic therapy in cancer cells," Nanomed. 3, 73-82 (2008). 13. S. M. Ansell, S. A. Johnstone, P. G. Tardi, L. Lo, S. W. Xie, Y. Shu, T. O. Harasym, N. L. Harasym, L. Williams, D. Bermudes, B. D. Liboiron, W. Saad, R. K. Prud'homme, and L. D. Mayer, "Modulating the therapeutic activity of nanoparticle delivered paclitaxel by manipulating the hydrophobicity of prodrug conjugates," J. Med. Chem. 51, 3288-3296 (2008). 14. P. Zhang, W. Steelant, M. Kumar, and M. Scholfield, "Versatile photosensitizers for photodynamic therapy at infrared excitation," J. Am. Chem. Soc. 129, 4526 (2007). 15. H. Maeda, J. Fang, T. Inutsuka, and Y. Kitamoto, "Vascular permeability enhancement in solid tumor: various factors, mechanisms involved and its implications," Int. Immunopharm. 
3, 319-328 (2003). 16. M. E. Gindy, S. Ji, T. R. Hoye, A. Z. Panagiotopoulos, and R. K. Prud'homme, "Preparation of Poly(ethylene glycol) Protected Nanoparticles with Variable Bioconjugate Ligand Density," Biomacromol. 9, 2705-2711 (2008). 17. M. E. Gindy, A. Z. Panagiotopoulos, and R. K. Prud'homme, "Composite block copolymer stabilized nanoparticles: Simultaneous encapsulation of organic actives and inorganic nanostructures," Langmuir 24, 83-90 (2008). 18. Y. Liu, K. Kathan, W. Saad, and R. K. Prud'homme, "Ostwald ripening of beta-carotene nanoparticles," Phys. Rev. Lett. 98 (2007).
Introduction
Up-conversion phosphors (UCPs) are ceramic materials in which rare earth atoms are embedded in a crystalline matrix. The materials absorb infrared radiation and up-convert to emit in the visible spectrum through a series of real, as opposed to virtual, levels as in conventional two-photon dyes. The upconversion mechanism can be described as either sequential excitation of the same atom or excitation of two centers and subsequent energy transfer [1][2][3][4]. The emission of UCPs consists of sharp lines characteristic of atomic transitions in a well-ordered matrix. By use of different rare earth dopants, including Er3+, a large number of distinctive emission spectra can be obtained that can be tailored to the photosensitizer excitation spectra. Their main advantage is the sequential 2 (or higher) photon nature of the excitation process, which gives rise to the very low power levels associated with upconversion. The excitation intensities we use here, of Watts/mm^2, are 10^7 times less than the intensities needed for 2-photon excitation of typical organic dyes [5]. The absorption maximum of the Er3+ ion is centered at 975 nm and is ideally suited for photodynamic therapy (PDT), since it can be easily excited using very low cost IR CW diode lasers and it falls in a region of relative transparency for penetration in tissue. Another advantage of UCPs is their resistance to photobleaching. Since these transitions come from rare earth atomic orbitals and not molecular orbitals, UCPs do not photobleach as organic dyes do; we have illuminated these materials for days at high levels of emission output and not observed any decrease in the effective quantum yield. Since they emit from the inner f-shell levels of the rare earths, the emission lines are relatively sharp (10 nm).
While the NIR excitation of UCP particles addresses the problem of tissue light penetration, three problems remain: (1) elimination of systemic soluble porphyrin, (2) the efficient capture of photons and production of singlet oxygen, and (3) biocompatible delivery to the cancer site. We address these issues by presenting complex UCP-core nanocrystals consisting of an inner UCP core; an intermediate coat of tetraphenylporphyrin (TPP), which is excited by the up-converted 560 nm light generated by two 980 nm photons absorbed by the UCPs; and an outer layer of polyethylene glycol (PEG), which acts as a solubilizing agent and also allows penetration of O2 and diffusion of singlet oxygen. The PEG also serves to make the nanoparticles biocompatible and can be used for further chemical modification. The control of nanoparticle size is important because carriers with sizes between 100 nm and 300 nm having a biocompatible PEG coating concentrate in cancer tissue by exploiting the defective endothelial cell lining of the vasculature in fast-growing solid tumors [6]. This size-based targeting mechanism is termed Enhanced Permeability and Retention (EPR).
Results
The core UCP nanocrystals are synthesized by a high temperature ligand exchange reaction [7], which produces nanoparticles of approximately 160 nm in diameter with a hydrophobic TOPOA (trioctylphosphine-oleic acid) surface coating. The co-encapsulation of the UCP nanocrystals with the organic porphyrin sensitizer and a PEG protective layer, to form the complex functional nanoparticles (CNPs), can be done in one step using our Flash NanoPrecipitation process [8]. The scheme for this assembly is presented in Fig. 1: an organic stream containing hydrophobic UCP particles, hydrophobic tetraphenylporphyrin (TPP), and a PEG block copolymer (BCP) is mixed against a water stream acting as the anti-solvent. Producing supersaturations as high as 1000 in milliseconds initiates non-selective, diffusion-limited aggregation and the formation of composite particles with the desired properties. Details of the CNP-UCP assembly can be found in Supplementary Materials. Briefly, the nanocrystals were suspended in THF at a concentration of 1.0 mg/ml, while the BCP consisted of a 7K hydrophobic polycaprolactone (PCL) block and a 5K hydrophilic polyethylene glycol (PEG) block.
Analysis of the size distribution of the micromixer output, measured by dynamic light scattering for various ratios of BCP:UCP:TPP, is shown in Fig. 2. The uncoated UCPs are narrowly distributed with a mean size of 160 nm. Addition of the TPP splits the distribution into two populations: larger composite objects of about 280 nm, presumably the fully self-assembled composite nanoparticles (CNPs), and smaller objects of roughly 60 nm, presumably micelles formed by BCP surrounding a core of TPP. Since the porphyrin-only population gains higher and higher representation as the porphyrin concentration increases, while the larger objects do not shift in size, we surmise that the TPP photosensitizer forms a thin layer around the crystals, with all excess material going to the porphyrin-only particles; that is, the 280 nm nanocomposite particles form a homogeneous and well-defined class of objects.
Figure 3 shows the emission from crystals encapsulated in CNPs as well as the absorption spectrum for those particles. The upconversion spectra display sharp peaks both for unprotected particles in THF and for composite nanoparticles in water. The sharp emission peaks are typical of rare earth upconversion materials and their emission from atomic orbital transitions [9]. Overall brightness was attenuated by 50% to 90% upon polymer encapsulation. This effect was most pronounced for high BCP concentrations (recipes with 6:1 and 3:1 polymer-to-crystal loading ratios), while low block copolymer formulations (0.1:1 polymer:crystal) showed the least attenuation. The inclusion of porphyrin in the particles further diminished the measured emission intensity via absorption attenuation.
Fig. 4. Microscopy setup for the singlet oxygen production test using a fluorescent dye assay. Samples containing composite nanoparticles and the fluorescent dye ADPA were loaded into 1-mm wells drilled into glass microscope slides. The dye was imaged epifluorescently, with excitation at 380 nm and emission collected from 430-480 nm. IR excitation was delivered from below the well, with a 2.5 W infrared laser beam focused to a 0.3 mm spot. UV and IR excitation light, as well as the visible emissions from the particles, were filtered out and only the blue fluorescence signal from the ADPA was recorded for each sample.
Following literature protocols [10,11], singlet oxygen production by the CNPs was monitored in a solution containing the fluorescent dye 9,10-anthracenedipropionic acid (ADPA). The fluorescence of the dye is quenched upon reacting with singlet oxygen, allowing the time course of singlet oxygen production to be monitored by tracking the ADPA concentration. We used an epifluorescent microscopy setup to image the ADPA while providing IR excitation from below the microscope stage, as depicted in Fig. 4. Photographing CNP-ADPA mixtures in glass microwells and then taking an average blue-channel or grayscale intensity proved to be a valid method for measuring ADPA concentration: pixel intensity was linear in ADPA concentration for all CNP-ADPA preparations. Consequently, changes in average blue-channel or grayscale intensity were used to monitor ADPA concentration as a function of IR exposure time.
Measurable, unambiguous infrared-initiated singlet oxygen production was observed using a composite nanoparticle recipe of 1:1:2 PEG-PCL:UCP:TPP in 3x PBS and 0.05 g/L ADPA (100 micromolar). The sample was loaded in a glass microwell and immobilized by a glass cover slip (preparation of the well and sample is described in Methods). A 5-second image of the well was captured once every 7 minutes, with a total UV exposure time of approximately 10 seconds per 7-minute cycle. For the first six cycles, no NIR excitation was provided. From minute 42 onward, 20 W/mm2 of 975 nm infrared light was delivered to the well in 7-minute intervals; during NIR exposure the UV excitation was turned off. Figure 5 plots the time course of the average image intensity over the 150 minutes of testing. The first four data points in Fig. 5 are controls with no NIR illumination, with only the 10-second illumination by the 380 nm probe beam to measure ADPA fluorescence. A minor amount of photobleaching from the probe beam is observed. However, when the NIR illumination is initiated at 42 minutes, a dramatic increase in the bleaching rate, and hence in singlet oxygen production, is observed. As is evident when the bleaching kinetics are plotted as log(intensity) vs. time, the bleaching decay is not a simple exponential but rather a clear power-law response with a slope of -1 on a log-log scale. This finding is consistent with the ADPA bleaching being a bimolecular reaction with the initial dissolved [O2] concentration approximately the same as the [ADPA] concentration of 100 micromolar. If most of the change can be attributed to the IR illumination, this corresponds to an average ADPA bleaching rate of 1.1 x 10^-3 mmol/L-min during the IR exposure period. Since the ADPA reaction will not capture all of the produced singlet oxygen, this serves as an underestimate of the singlet oxygen production rate. A control experiment was performed to compare singlet oxygen production in CNPs with and without porphyrin. The particles without TPP showed no bleaching of ADPA, indicating that the porphyrin is a necessary component of this system (data not shown).
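The power-law check described above amounts to a straight-line fit in log-log space. A minimal sketch in Python, using hypothetical placeholder intensities rather than the measured data:

```python
import numpy as np

# Hypothetical time points (minutes after NIR turn-on) and mean well
# intensities extracted from the images; placeholders, not the data of Fig. 5.
t = np.array([7, 14, 28, 56, 112], dtype=float)
intensity = np.array([0.90, 0.47, 0.24, 0.12, 0.06])

# A power-law decay I ~ t**m appears as a straight line of slope m
# in log-log space; the text reports m close to -1.
slope, intercept = np.polyfit(np.log10(t), np.log10(intensity), 1)
print(f"power-law exponent: {slope:.2f}")  # approximately -1 for these values
```

A slope near -1, rather than the straight line an exponential would give on a log-linear plot, is what motivates the bimolecular interpretation above.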
Conclusion
We have demonstrated a functional prototype that co-localizes insoluble porphyrin with UCP crystals and efficiently generates singlet oxygen under NIR illumination. The application of UCP materials in photodynamic therapy is an attractive and active area. Chatterjee et al. [12] were the first to demonstrate folate receptor targeting with UCP nanoparticles. Their assembly approach, loading photosensitizer by adsorption into a cationic poly(ethyleneimine) (PEI) polymer coating, produced loading levels of 1:15,000 photosensitizer:UCP crystal (weight basis), far lower than the 1:3 level that can be produced by our rapid precipitation process. Furthermore, the cationic PEI polymer is known to be cytotoxic, in contrast to our PEG surface layer, which is known to be biocompatible [13]. Zhang et al. have also produced PDT nanoparticles based on UCP nanoparticles [14]. Their process involves trapping photosensitizer in a thin silica layer deposited on the UCP nanoparticle surface. Their sensitizer loading is 1:219, and the bare silica layer on their nanoparticle surfaces is not expected to produce long-circulating nanoparticles. While the PEG coating applied by our Flash NanoPrecipitation process is biocompatible, studies on the biocompatibility of these UCP nanoparticle systems are underway. The demonstration of delivery to tumor sites is also the subject of continuing studies. Particles of the size we have produced will lodge at the site of solid tumors by passing through defects in fast-growing tumor vasculature via the EPR effect [15], and we have demonstrated the attachment of targeting ligands to the PEG chains in our nanoparticle constructs [16]. However, in vivo demonstration of targeting for these UCP nanoparticles is ongoing research. Many questions remain unresolved. The exact ratio of block copolymer, upconverting crystal, and porphyrin that maximizes singlet oxygen production per IR photon is unknown. The attenuation of the emitted light from the UCP by the porphyrin coating, and the kinetics and fate of singlet oxygen produced by the composite nanoparticle, have not been modeled. Time-resolved fluorometry methods could enable the direct imaging of singlet oxygen. Beyond the physical characterization of the system, in vitro and in vivo efficacy studies are of high interest, especially in comparison to the efficacy of existing photodynamic therapies. Lastly, the therapeutic applications of the UCP-porphyrin pairing in a block-copolymer platform are further enhanced by the possibility of exciting the nanocrystals at X-ray wavelengths, with the increased tissue penetration of that radiation. While this introduces the trade-off of incurring tissue damage while initiating the PDT system, it also expands the applicability of PDT to almost any tissue and removes the limitations of optical penetration depth.
Methods
The UCP particles were made by the thermolysis method with trioctylphosphine/oleic acid as capping ligands [7]. The nanocrystals were received as a relatively dry, paste-like precipitate and stored in a sealed polypropylene container at room temperature. The nanocrystals were suspended in tetrahydrofuran (THF), a water-miscible organic solvent, 1-4 weeks after synthesis and drying. Nanocrystal aggregates were broken up with a probe-tip sonicator (Vibracell, Sonics and Materials Inc., Newtown, Connecticut), used at high power and 100% duty cycle for 15 minutes. Storage of the nanocrystals did not affect dispersibility within a month of synthesis. After two months, however, the UCP crystals appeared to form larger, irreversible aggregates. Once the UCP nanocrystals were successfully dispersed, block copolymer (PEG 5k-b-PCL 7k) and porphyrin were added to the suspension within half an hour of sonication. The polymer and organic additives were allowed to dissolve for 20-30 minutes before mixing. Synthesis of the particles was achieved using the Flash NanoPrecipitation method in a two-stream multi-inlet vortex mixer, in keeping with the literature procedure [17].
Typical concentrations of the organic stream were 1 mg/mL each of UCP nanocrystals, PEG-PCL, and TPP in reagent-grade THF (Aldrich). The organic stream was mixed against Milli-Q water (Millipore, Billerica, Massachusetts) at a 1:9 v/v THF:water ratio. Both streams were injected into the mixer using electronically controlled syringe pumps (Harvard Apparatus, PHD 2000 programmable, Holliston, Massachusetts). The final compositions of the composite nanoparticle aqueous suspension were 0.01 to 1.00 wt% PEG-PCL, 0.10 wt% UCP, and 0.05-0.20 wt% TPP. The assembled particles were then dialyzed against Milli-Q water in Spectra/Por dialysis bags, MWCO 6000-8000 Daltons (Spectrum Laboratories Inc., California), according to the prescribed method [18]. Because all particles were made with the same UCP loading, UCP crystal weight fraction will be given as a nominal indicator of composite nanoparticle concentration. The stability of mixed particles was assessed both visually and using dynamic light scattering (DLS). Particles not stabilized by block copolymer formed large aggregates and were visibly turbid; such visual observations were corroborated by large particle sizes in DLS measurements. UCP nanocrystals in THF, as well as undialyzed and dialyzed composite nanoparticles, were measured using dynamic light scattering. The suspensions were typically diluted with additional THF or deionized water, as appropriate, to approximately 10^-3 wt% UCP, low enough to be in the range where multiple scattering does not influence the measured sizes. The particle sizes and size distributions in these diluted suspensions were measured with a DLS setup comprising a frequency-doubled Nd:YAG continuous-wave laser with output at 532 nm and 50 mW (Coherent Inc., Compass 315M-150mW, 320 micrometer beam, Santa Clara, California) and a photomultiplier tube at a 90-degree collection angle (Brookhaven Instruments, BI-200SM, Holtsville, New York). Data were communicated via a serial connection to an autocorrelator PC card and software program (ALV-Laser Vertriebsgesellschaft mbH, ALV-5000/E, Langen, Germany), which calculates particle size distributions from decay time distributions, assuming hard-sphere behavior for the particle diffusivities (Stokes law).
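The last step, converting an autocorrelation decay rate to a particle size, follows the Stokes-Einstein relation under the same hard-sphere assumption. A minimal sketch, with a hypothetical decay rate standing in for a real fit result:

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
T = 298.15             # temperature, K
eta = 0.89e-3          # viscosity of water at 25 C, Pa*s
lam = 532e-9           # laser wavelength, m
n_med = 1.33           # refractive index of water
theta = np.pi / 2      # 90-degree collection angle

# Scattering vector magnitude for this DLS geometry.
q = 4 * np.pi * n_med * np.sin(theta / 2) / lam

# Hypothetical autocorrelation decay rate Gamma (1/s) from the ALV fit.
gamma = 2.0e3
D = gamma / q**2                       # translational diffusivity, m^2/s
d_h = kB * T / (3 * np.pi * eta * D)   # Stokes-Einstein hydrodynamic diameter
print(f"hydrodynamic diameter: {d_h * 1e9:.0f} nm")  # ~120 nm for these inputs
```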
Fig. 2. Particle size distributions (normalized to constant area) for varying ratios of BCP:UCP:TPP in the turbulent micromixer. The initial distribution of the bare UCPs in hexane is the solid green line.
Fig. 1. Schematic representation of the Flash NanoPrecipitation method for assembly of the composite nanoparticles, incorporating the highly hydrophobic organic photosensitizer tetraphenylporphyrin (red) and 170-nm hexagonal NaYF4:Er3+,Yb3+ upconversion crystals (green) in a shell consisting of PEG(5k)-b-PCL(7k) amphiphilic block copolymer. All the constituents are initially dissolved or suspended in the organic phase, which is mixed at high speed against a large volume of water (10:1 water:THF), yielding the composite nanoparticle (CNP) structure shown at right.
Fig. 3. Emission spectrum of the upconversion crystals (lower spectrum) juxtaposed with the absorption spectrum of the tetraphenylporphyrin (TPP) photosensitizer (upper spectrum). Both spectra were collected from assembled nanoparticles rather than from the components in an organic solvent, though the results were virtually identical for the latter case. Both spectra display strong, sharp peaks, with overlap of the UCP emission and the TPP absorption in the 550-nm region.
Fig. 5. Time course of ADPA bleaching during NIR exposure. The right panel shows images from the first 98 minutes of testing, with minimal bleaching in the first three frames (no NIR input) and significant bleaching in the subsequent frames. This is illustrated in the curve in the left panel: the UV needed to image the ADPA dye leads to some baseline bleaching, but the ADPA bleaching takes on a markedly different rate under infrared illumination. We use a log-log scale, showing the power-law behavior of the bleaching.
"Chemistry",
"Materials Science",
"Medicine"
] |
Unravelling the Dust Attenuation Scaling Relations and their Evolution
We explore the dependence of dust attenuation, as traced by the $\rm H_{\alpha}/\rm H_{\beta}$ Balmer decrement, on galactic properties by using a large sample of SDSS spectra. We use both Partial Correlation Coefficients (PCC) and Random Forest (RF) analysis to distinguish those galactic parameters that directly and primarily drive dust attenuation in galaxies from parameters that are only indirectly correlated through secondary dependencies. We find that, once galactic inclination is controlled for, dust attenuation depends primarily on stellar mass, followed by metallicity and velocity dispersion. Once the dependence on these quantities is taken into account, there is no dependence on star formation rate. While the dependence on stellar mass and metallicity was expected based on simple analytical equations for the interstellar medium, the dependence on velocity dispersion was not predicted and we discuss possible scenarios to explain it. We identify a projection of this multi-dimensional parameter space which minimises the dispersion in terms of the Balmer decrement and which encapsulates the primary and secondary dependences of the Balmer decrement into a single parameter defined as the reduced mass $\mu = \log {\rm M}_{\star} +3.67 [{\rm O/H}] + 2.96 \log (\sigma_v/100~km~s^{-1})$. We show that the dependence of the Balmer decrement on this single parameter also holds at high redshift, suggesting that the processes regulating dust production and distribution do not change significantly through cosmic epochs at least out to z$\sim$2.
fragmentation, hence the formation of (low mass) stars (Schneider et al. 2006 ).
The most important aspect of dust for this work is its ability to scatter and absorb ultraviolet (UV) and optical light emitted by stars and by the ISM, and re-emit it at longer IR wavelengths, effectively reshaping the whole SED of the galaxy. These processes cause the effects known as dust extinction, dust attenuation and dust reddening (Draine 2003). Being able to accurately reverse the effects of dust on galaxy SEDs is essential to accurately determining galaxy parameters and investigating the physics within galaxies that drives their evolution.
Additionally, understanding how dust content scales with other galactic parameters can tell us a lot about how dust forms and what role it plays in the evolution of galaxies. There are several scaling laws in the literature connecting different galactic parameters, several of which relate (or may relate) to the dust mass, which in turn is related to the dust attenuation in galaxies.
The star-forming main sequence (SFMS, Brinchmann et al. 2004; Sandles et al. 2022) relates the stellar mass to the star formation rate (SFR), and recent studies suggest that this is actually an indirect relation, a by-product of other more fundamental relations (Lin et al. 2019; Baker et al. 2022b on resolved scales, and Baker et al. 2023 on integrated scales). The relation between the SFR and the stellar mass is also present on resolved kpc to sub-kpc scales, forming the so-called resolved star-forming main sequence (rSFMS, Sánchez et al. 2013; Cano-Díaz et al. 2016; Hsieh et al. 2017).
The resolved molecular gas main sequence (MGMS) relates the stellar mass surface density to the molecular gas mass surface density, as found in Lin et al. (2019), Barrera-Ballesteros et al. (2020), Morselli et al. (2020), Pessa et al. (2021), Ellison et al. (2021a, b), and Baker et al. (2022b). The molecular gas mass surface density is also related to the SFR surface density; this is known as the Schmidt-Kennicutt law (SK, Schmidt 1959; Kennicutt 1998), which is understood as the molecular gas acting as fuel for star formation (Kennicutt 1998).
The fundamental metallicity relation (FMR) empirically relates the stellar mass, SFR and metallicity of the ISM, and has been reported by several authors (Mannucci et al. 2010; Nakajima & Ouchi 2014; Salim et al. 2014; Gebhardt et al. 2016; Hunt et al. 2016; Hirschauer et al. 2018; Curti et al. 2020a; Baker et al. 2022a), with the metallicity depending primarily on the stellar mass and showing a secondary, inverse dependence on the SFR. This is a generalization of the mass-metallicity relation (MZR, Lequeux et al. 1979), the correlation between stellar mass and metallicity. The FMR indicates a non-linear relationship between the stellar mass, metallicity and SFR, with the metallicity decreasing with SFR at low masses and becoming almost independent of SFR at higher masses. This relation is believed to hold up to a redshift of z ∼ 3 (Cresci, Mannucci & Curti 2019; Sanders et al. 2021).
Various works have investigated the dependence of the dust attenuation, traced by the Balmer decrement (Hα/Hβ), on galactic properties. In particular, Garn & Best (2010) studied a local sample of galaxies at z < 0.7, investigating how the dust attenuation determined using the Balmer decrement method depends on the stellar mass, SFR and gas-phase metallicity. They determine a positive, non-linear correlation between the dust attenuation and each of these quantities. However, to understand which parameter is most important in driving the dust attenuation, and which parameters only contribute a secondary dependence through their dependence on the dominant parameter (if there is one), they employ principal component analysis (PCA). This method identifies which parameter causes the most variation in the dust attenuation. They determine the most important parameter to be the stellar mass, and they claim that the dependences of the dust attenuation on the other parameters are all secondary, arising from their dependence on stellar mass. They argue that galaxies with a larger stellar mass will have built up a larger reservoir of dust, since dust is produced in stars (Draine 2003). However, the PCA method is only accurate when there is a simple linear relationship between the quantities, which is not necessarily present here.
Several other works have identified correlations between the dust attenuation and different galactic parameters, such as SFR (Garn et al. 2010), stellar mass (Pannella et al. 2009), and metallicity (Asari et al. 2007); however, few have investigated whether the correlation identified is a direct correlation or an indirect one introduced by secondary correlations between these and other galactic parameters.
To investigate how the dependencies between the dust attenuation and the galactic parameters evolve with redshift, giving insight into both the dust production mechanisms and how these evolve, works such as Shapley et al. (2022) have compared samples of local galaxies and higher redshift galaxies around cosmic noon. Cosmic noon, at z ∼ 2-3, is the period when the cosmic average SFR was largest and about 13 per cent of the stellar mass content of today's galaxies was formed (McLeod et al. 2021), making this epoch interesting for examining star formation mechanisms.
Several studies have identified the dependence of the dust attenuation on the stellar mass as the most important, and some have even observed this relation not evolving up to a redshift of about z ∼ 2 (Whitaker et al. 2017; McLure et al. 2018; Shapley et al. 2022), and more recently up to z ∼ 6.5 (Shapley et al. 2023). In particular, Shapley et al. (2022) identified no significant evolution in the relationship between the dust attenuation (using both the Balmer decrement and the UV continuum) and the stellar mass between Sloan Digital Sky Survey (SDSS) (z ∼ 0) and MOSFIRE Deep Evolution Field (MOSDEF) (z ∼ 2.3) galaxies, and argue that this lack of evolution can be explained by considering the evolution of the other parameters, such as metallicity, dust mass and gas mass.
There is some evidence that at z > 3 the dust attenuation evolves to lower values for a given stellar mass (Fudamoto et al. 2020). Following this, Bogdanoska & Burgarella (2020) compared samples of galaxies from the literature in the redshift range 0 < z < 10 to identify how the UV dust attenuation versus stellar mass relation may evolve. They assumed a linear fit to the relation for all the samples and then tracked the evolution of this gradient with redshift. From this, they conclude that the relation between the dust attenuation and stellar mass evolves across the entire redshift range investigated, with the gradient of the linear fit peaking at cosmic noon and decreasing at higher and lower redshifts. This is in contrast to the results from Shapley et al. (2022); however, that work considered different samples of galaxies.
In this work, we first investigate how the Balmer decrement depends on the different galactic parameters and, in contrast to previous works, disentangle the primary from the secondary dependencies using advanced statistical techniques, namely random forest (RF) regression and partial correlation coefficient (PCC) analysis. This part of the work was applied to a large sample of local galaxies observed by the SDSS (York et al. 2000).
We also wanted to see if these relations evolve with redshift by comparing the local galaxies with samples of galaxies at higher redshift. In this work, we used two higher redshift samples, one observed by the K-band Multi Object Spectrograph (KMOS, Sharples et al. 2013) on the VLT, and the other observed by the Multi-Object Spectrometer for Infrared Exploration (MOSFIRE, McLean et al. 2012) on the Keck I telescope.
The layout of this paper is as follows. In Section 2, we discuss our data sources. In Section 3, we present the physics behind the determination of the dust attenuation from the emission-line fluxes. In Section 4, we make theoretical predictions on the most important galactic properties in determining the dust attenuation through known scaling laws. In Section 5, we present the statistical analysis tools used in this work. In Section 6, we present the results of our statistical analysis and the comparison between the samples of galaxies. In Section 7, we summarize the main findings of this work.
DATA AND SAMPLE SELECTION
In this work, we consider data from three spectroscopic surveys: SDSS, i.e. local galaxies at z ∼ 0, and KMOS Lensed Velocity and Emission Line Review (KLEVER) and MOSDEF, i.e. galaxies at z ∼ 1-3.
Local sample
To understand the dependency of the Balmer decrement on galaxy properties we first explored local galaxies, using the Sloan Digital Sky Survey (SDSS, York et al. 2000) Data Release 12 (DR12, Alam et al. 2015). SDSS uses a 2.5-m wide-field telescope at the Apache Point Observatory in New Mexico, US, utilizing the u, g, r, i, and z bands (Fukugita et al. 1996). The spectra of the objects are obtained by a pair of multiobject double spectrographs with 3-arcsec diameter fibres, producing a spectral coverage of 3800-9200 Å (Abazajian et al. 2009).
The spectroscopic redshifts, emission-line fluxes, stellar masses, and SFRs of the SDSS galaxies are calculated by the MPA-JHU group, providing measurements for 927 552 galaxies at redshifts z < 0.7.
Emission-line fluxes and nebular velocity dispersion
To determine the emission-line fluxes of the galaxies in the SDSS survey, the MPA-JHU group subtracted the best-fitting stellar population model of the stellar continuum from each galaxy's spectrum, then fit the nebular emission lines (Tremonti et al. 2004). To better measure the weak nebular lines, they fit Gaussians to the spectra simultaneously, requiring all the Balmer lines to have the same width and velocity offset (and similarly for the forbidden lines), whilst also taking into account the spectral resolution. The nebular velocity dispersion was calculated from the width of the emission lines.
Stellar mass
The stellar masses are taken from the MPA-JHU catalogue and are calculated by comparing the observed SEDs of the galaxies to a large number of model SEDs, which provides information on the stellar populations of the galaxies, which in turn is used to calculate the mass-to-light ratio and hence the stellar mass of the galaxy (Kauffmann et al. 2003a; Salim et al. 2007), assuming the Kroupa (2001) initial mass function (IMF). These stellar masses were then converted to the Chabrier (2003) IMF.
Star formation rate
The SFR of galaxies can be calculated in several ways, one of which uses the Hα luminosity. Young massive stars, such as O/B stars, emit largely in the UV, ionizing the gas around them and producing H II regions where recombination produces Balmer lines, with the Hα line being the brightest. Assuming all Hα emission is produced in H II regions around O/B stars and all the photons emitted by the star ionize the surrounding hydrogen, the SFR can be shown to be proportional to the dust-corrected Hα luminosity (Kennicutt & Evans 2012). The SFR data taken from the MPA-JHU catalogue were calculated using this method assuming the Kroupa (2001) IMF, as described in Brinchmann et al. (2004), with the Hα flux being dust-corrected using the Balmer decrement method. These SFRs were then converted to the Chabrier (2003) IMF.
One issue with this method in the context of this paper is that the SFR derived from Hα will be strongly correlated with the Balmer decrement (Hα/Hβ), both because the Hα flux is dust-corrected using the Balmer decrement and because Hα appears in both the SFR and the Balmer decrement. This cross-correlation between the SFR derived from Hα and the Balmer decrement is discussed in Appendix A. To avoid this spurious correlation affecting our results, we calculated an additional SFR tracer using the D4000 break, following the methodology in Bluck et al. (2020a), to further investigate the importance of SFR in determining the Balmer decrement. This method is used in works such as Brinchmann et al. (2004) and Bluck et al. (2020a) when measurements of Hα are not available. The resulting calibration and analysis are shown in Appendix B, and support the conclusions of the main text.
Metallicity
The gas-phase metallicity of the galaxies in the SDSS survey was calculated for this work through the strong-line calibration method presented in Curti et al. (2020a). These calibrations between metallicity and ratios of strong emission lines were determined by using the 'direct' electron temperature (Te) method, as reviewed in, for example, Maiolino & Mannucci (2019), where electron temperatures, Te, are measured by stacking thousands of local galaxies to detect auroral lines, and then used to infer the metallicity. The strong-line ratios used in this work are shown in Table 1, following the same definitions as in Curti et al. (2020a).
There are various caveats to the different diagnostics, such as some being double-valued (Curti et al. 2020a). To mitigate these issues, we combine different combinations of the diagnostics for each galaxy, as is done in Curti et al. (2020a). In this work, we chose to use the gas-phase metallicity relative to the solar metallicity, [O/H] = 12 + log(O/H) - 8.69 (Asplund et al. 2009).
Inclination
In this work, we explore the dependence of dust attenuation on galaxy inclination. The inclination measurements for the galaxies in this sample were extracted from the Simard et al. (2011) morphological catalogue. To determine the morphological parameters, such as the galaxy inclinations, they used a galaxy model consisting of the sum of a pure exponential disc and a de Vaucouleurs bulge (Sérsic index nb = 4).
Selection criteria
The DR7 sample consists of 927 552 galaxies. To reduce the effect of noise contributed by the sky background, the detector, and fluctuations in the source itself, we set signal-to-noise cuts on the line fluxes. Following Mannucci et al. (2010) and Hayden-Pawson et al. (2022), we adopt a high signal-to-noise ratio (S/N) on the Hα line (> 20σ), which is essentially equivalent to a cut in SFR, and ensures that we have detections of Hβ and the other metal lines needed to measure the metallicity without having to impose constraints on their S/N, which would bias the metallicity of the sample. We do, however, set a signal-to-noise cut on Hβ of 2σ to ensure the measured Balmer decrement is reliable. Imposing any higher signal-to-noise cuts on the Hβ line would bias our sample towards less dusty galaxies, since dustier galaxies attenuate the Hβ line more and so tend to have weaker Hβ detections. If the signal-to-noise cut on Hβ were increased to 3σ, there would be 18 fewer galaxies in our sample, a 0.08 per cent decrease; such a small difference would not affect our results. In addition to these signal-to-noise cuts, when determining the metallicities of the galaxies, any diagnostic with lines detected above the 3σ level was combined to calculate the metallicity, following Curti et al. (2020a).
The metallicity diagnostics used in this work rely on only star-forming galaxies (SFGs) being in the sample; these were selected using BPT emission-line diagnostic diagrams (Baldwin, Phillips & Terlevich 1981), which compare the [O III] λ5007/Hβ line ratio against the [N II] λ6583/Hα line ratio. We did not apply signal-to-noise cuts on the [N II] and [O III] lines, since this would bias the metallicity measurements of the galaxies. The signal-to-noise cut of 20 on the Hα line implies the other metal lines used in Curti et al. (2020a) are detected. For the few cases in which some of the metal lines are not detected, the upper limits on these lines can still be used to constrain the metallicity (Curti et al. 2020a). We used the Kauffmann et al. (2003b) demarcation line to define SFGs in this work. Due to the spectral resolution of SDSS being ∼2000, we also selected only galaxies with log nebular velocity dispersion above log10(σHα [km/s]) > 1.75.
To ensure not just the central region of the galaxies was being sampled, we required the projected fibre aperture to be at least 2 kpc, which set a lower limit on the redshift of z = 0.043, since the aperture diameter is 3 arcsec. We set no upper cut on the redshift. As explained in Section 6.1.1, the galaxies were also selected such that their inclination was less than 45°, to minimize the effect of increased dust attenuation with increased inclination. These selection criteria reduced the local sample to 21 488 galaxies, with a maximum redshift of z = 0.308 and a median redshift of z = 0.104. (A sketch of these cuts in code is given below.)
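For concreteness, the selection above can be expressed as a simple catalogue filter. A minimal sketch, where the file name and column names are hypothetical placeholders for the MPA-JHU and Simard et al. (2011) fields:

```python
import pandas as pd

# Hypothetical matched catalogue; not a real file name.
df = pd.read_csv("sdss_mpa_jhu_matched.csv")

mask = (
    (df["sn_halpha"] > 20)             # S/N(Halpha) > 20
    & (df["sn_hbeta"] > 2)             # S/N(Hbeta) > 2
    & (df["bpt_class"] == "SF")        # star-forming per Kauffmann+03 line
    & (df["log_sigma_halpha"] > 1.75)  # nebular velocity dispersion cut
    & (df["z"] > 0.043)                # projected fibre aperture >= 2 kpc
    & (df["inclination_deg"] < 45)     # minimize inclination effects
)
sample = df[mask]
print(len(sample))  # ~21 488 galaxies in the paper's final sample
```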
High-redshift samples
To see how the relations between dust attenuation and other galaxy properties evolve, we studied samples of galaxies at higher redshifts observed spectroscopically in the near-IR. The first higher redshift sample we investigated was from the near-infrared KMOS Lensed Velocity and Emission Line Review (KLEVER, Curti et al. 2020b) survey. KLEVER is a European Southern Observatory Large Programme which has observed 192 galaxies in the redshift range 1.2 < z < 2.5 using KMOS on the VLT (Sharples et al. 2013). KMOS is a near-IR multiobject spectrograph using integral-field units (IFUs), observing in the Y, J, H, and K bands.
We additionally used galaxies taken from the MOSDEF (Kriek et al. 2015) survey to compare with our local galaxies (SDSS). This survey was observed with the Multi-Object Spectrometer for Infrared Exploration (MOSFIRE, McLean et al. 2012) on the Keck I telescope, observing in the Y, J, H, and K bands. The MOSDEF survey measured the rest-frame optical spectra of 1824 galaxies in three redshift intervals, 1.37 < z < 1.7, 2.09 < z < 2.61, and 2.95 < z < 3.80, which were selected such that the brightest rest-optical emission lines fall within the atmospheric transmission windows. Because of this, the galaxies in the highest redshift interval (2.95 < z < 3.80) were ignored in this analysis, as the Hα line was redshifted out of all the bands used on MOSFIRE. In this work, we only used the galaxies in the 2.09 < z < 2.61 redshift bin.
The IFU observations from KMOS allowed us to perform very accurate measurements of the Balmer decrement at high redshift. MOSFIRE instead observes emission lines through single slits, which require slit loss corrections that are prone to vary with operation time. However, the effect of these slit loss corrections is shown to be insignificant in determining the Balmer decrement, since the relative flux calibrations between different bands agree to within 13 per cent when comparing the MOSFIRE spectra with photometric SED models (Kriek et al. 2015). The line flux measurements were additionally compared with 3D-HST grism line fluxes in Kriek et al. (2015), finding good agreement. The grism spectra do not have any slit aperture, much like IFU observations, which suggests the slit loss corrections are robust and the aperture effects are not significant.
Additionally, the galaxies in the KLEVER survey were selected using Hα emission in the rest-frame optical, from the 3D-HST survey (Curti et al. 2020b). The galaxies in the MOSDEF survey are instead selected using the flux in the H band (dominated by the optical, rest-frame stellar continuum), aiming at obtaining a flat distribution in stellar mass (Kriek et al. 2015). The two selection criteria may potentially have different selection (bias) effects in terms of dust attenuation. The MOSDEF selection criteria do lead to a mass-complete sample at log(M*/M⊙) < 10.5; however, at higher stellar masses the survey had its lowest mass completeness for red dusty SFGs (Kriek et al. 2015; Runco et al. 2022). Hence, the dust attenuation of the galaxies in the MOSDEF survey will be lower than expected at stellar masses above 10^10.5 M⊙. The MOSDEF sample also has a lower mass completeness limit, with Sanders et al. (2021) not considering galaxies below 10^9 M⊙ due to their lower spectroscopic success rate when analysing the stellar MZR, and Shivaei et al. (2015) removing galaxies with stellar mass below 10^9.5 M⊙ due to poor photometric redshift measurements when analysing the SFR-stellar mass relation.
More recently, results have been reported for Balmer decrements at even higher redshift by using NIRSpec-JWST data (e.g. Shapley et al. 2023); however, not all the information required for our analysis (SFR, metallicity, and velocity dispersion) is provided for those galaxies, so they are not considered in this study. Moreover, the strongly wavelength-dependent slit losses of the small NIRSpec shutters, convolved with the galaxy sizes, make line ratios spanning a large wavelength range uncertain. For these reasons, in this work we mostly focus on the KLEVER and MOSDEF samples at z ∼ 1-2.
In the following section, we provide additional information on how the measurements of the galaxy parameters used in this work were extracted from these two surveys.
KLEVER
The emission-line fluxes and widths were measured for the KLEVER survey as described in Hayden-Pawson et al. (2022). To determine the emission-line fluxes and widths, a linear continuum, whose slope and normalization were free parameters, was first subtracted from the spectrum. They did not fit a proper stellar continuum since the observed continuum was so faint. All emission lines within the same observing band were fit simultaneously with Gaussian curves of equal width, but across bands the widths were allowed to vary to account for the different resolving powers of each band in KMOS. The nebular velocity dispersion was calculated in a manner consistent with SDSS.
To correct for the stellar absorption of the Balmer emission lines (namely Hα and Hβ), we calculated the stellar continuum from photometry for each galaxy. We used the Bayesian SED-fitting code BEAGLE (Chevallard & Charlot 2016) to perform SED modelling of publicly available photometry (Merlin et al. 2016; Criscienzo et al. 2017; Bradač et al. 2019) for all of the objects in our sample, with the aim of producing continuum-only spectra (without emission lines). We use a Chabrier (2003) IMF and assume a delayed exponential star formation history. Redshifts were fixed to their spectroscopic values. These continuum spectra were then normalized to the continuum around the Balmer lines in each band and subtracted from the integrated spectra. The Hα, Hβ, and [N II] doublet lines were simultaneously fit with Gaussians with tied redshifts and widths. Three Gaussians were fit to deblend the Hα and [N II] doublet, and the amplitudes of the two [N II] Gaussians were fixed to have a ratio of 3:1.
The galaxy stellar masses are taken from the KMOS3D data release (Wisnioski et al. 2019), calculated through SED fitting following the methodology in Wuyts et al. (2011), similar to that used for the SDSS sample. For the lensed galaxies in the sample, the stellar masses were calculated in Concas et al. (2022) using SED fitting following the methodology in Curti et al. (2020b).
We calculated the metallicity measurements for KLEVER following the same choice of metallicity diagnostics as for SDSS (Curti et al. 2020a).
The KLEVER sample provided by Hayden-Pawson et al. (2022) consisted of 192 galaxies. The only selection criteria we enforce are that both the Hα and Hβ lines are clearly detected. As a criterion, we conservatively use both the uncertainty from the Monte Carlo fitting (requiring S/N > 2) and the nominal uncertainties on the spectrum, summed in quadrature over the FWHM of the line and centred at the nominal line location (requiring S/N > 3). The uncertainty from the Monte Carlo fitting of the fluxes is determined by perturbing the spectra with Gaussian noise randomly extracted from the noise spectra, repeating the fit one hundred times, and taking the 16th and 84th percentiles of the 2.5σ-clipped resulting distribution of fluxes. In addition, when determining the metallicities of the galaxies, any diagnostic with lines detected above the 3σ level was combined to calculate the metallicity, following Curti et al. (2020a). We then required the galaxies to have at least two diagnostics available, else the resulting metallicity may have strong biases.
The BPT selection that was applied to the galaxies in the SDSS survey is not valid at this redshift, since the demarcation derived by Kauffmann et al. (2003b) is for local galaxies. Measurements of galaxies around z ∼ 2 in these BPT diagrams have identified an offset from the SDSS galaxies which can be mass dependent (for example, Shapley et al. 2015), and can be due to the higher redshift galaxies having a harder stellar ionizing radiation field and a higher ionization parameter (Steidel et al. 2014, see references therein). To remove AGNs and ensure only SFGs were considered in our analysis of the galaxies from the KLEVER survey, we first used the information from the [N II]-BPT diagram valid at the redshifts we are considering (Kewley et al. 2013), removing galaxies that were largely above the AGN demarcation line. We then implemented a visual inspection of all spectra to identify type 1 AGNs (via the presence of broad components under Balmer lines, mainly Hα). We additionally cross-matched X-ray catalogues, classifying a galaxy as an AGN and removing it from our sample if its X-ray luminosity was greater than 2 × 10^42 erg s^-1. After these cuts, 51 galaxies were left in our sample.
MOSDEF
The emission-line fluxes were measured for the MOSDEF survey as described in Kriek et al. (2015). The systemic redshift was measured from the highest signal-to-noise emission line, usually the Hα or the [O III] λ5008 line. Line fluxes were measured by fitting Gaussian profiles over a linear continuum, where the centroids and widths were allowed to vary. Uncertainties on the line fluxes were estimated using a Monte Carlo method in which the spectrum was perturbed according to the error spectrum and the line fluxes were remeasured. This process was repeated 1000 times, and the uncertainty on the line flux was taken to be the 84th-16th percentile range of the resulting distribution. This method also produced the emission lines' respective FWHMs, which were converted to velocity dispersions. These data are publicly available. Since a stellar continuum could not be measured, the Balmer lines were corrected for underlying stellar atmospheric absorption (typically from A-type stars) by modelling the galaxy stellar populations, as described in Reddy et al. (2015).
The stellar mass measurements for the MOSDEF galaxies were calculated by Sanders et al. (2021) using SED fitting to the photometry for the galaxies in the z ∼ 2.3 redshift interval. These data were made available upon direct request to the authors.
The metallicity measurements for MOSDEF were calculated in this work following the same metallicity diagnostics as for SDSS and KLEVER, from Curti et al. (2020a). The median of the metallicities (12 + log(O/H)) for the galaxies selected from the MOSDEF survey calculated in this work is 8.43, with a 16th-84th percentile range of 0.23. The metallicities calculated for the same sample of galaxies in Sanders et al. (2021) have a median of 8.47 with a 16th-84th percentile range of 0.35. The metallicity measurements calculated in this work are thus slightly lower, with a narrower spread, than those from Sanders et al. (2021). However, to reduce biases in the choice of metallicity diagnostics, since Sanders et al. (2021) calculated their own calibrations for their metallicity measurements, we chose to use the same method of determining metallicities for all galaxies used in this work, following Curti et al. (2020a).
AGNs were identified and removed from the galaxies used in this work, provided by Sanders et al. (2021), using their X-ray and infrared properties, as well as the criterion log([N II]/Hα) > -0.3 (Coil et al. 2015; Azadi et al. 2017).
The only flat S/N cuts were made on the Hα and Hβ lines, requiring S/N > 3 for each line. In addition, when determining the metallicities of the galaxies, following the same metallicity diagnostics as for SDSS and KLEVER from Curti et al. (2020a), any diagnostic with lines detected above the 3σ level was combined to calculate the metallicity, requiring the galaxies to have at least two diagnostics available. These cuts reduced the sample to 188 galaxies.
BALMER DECREMENT, REDDENING, AND DUST ATTENUATION
In this work, to measure the dust attenuation, Aλ, from observational data we use the Balmer decrement method. The Balmer decrement is defined as the ratio of the flux of the Hα to the Hβ emission line. If we assume Case B recombination, a temperature of T = 10^4 K and an electron density of ne = 10^2 cm^-3, as is done in many similar studies (Garn & Best 2010; Piotrowska et al. 2020; Reddy et al. 2020), the Balmer decrement has an intrinsic value of 2.86. Recent work (Tacchella et al. 2022) suggests that the intrinsic Balmer decrement may be slightly higher when the contribution to the Balmer decrement from collisional ionization is taken into account.
To determine the dust attenuation Aλ from these Balmer-line fluxes, we first define the attenuation curve, $k_\lambda$, related to the dust attenuation and the reddening, E(B - V), through the definition
$$k_\lambda = \frac{A_\lambda}{E(B-V)}. \quad (1)$$
Some attenuation laws (Calzetti et al. 2000; Reddy et al. 2015) are derived using empirical methods, comparing observed galaxy SEDs with the SEDs of galaxies that are assumed to be unattenuated. Another method to determine the attenuation law is SED fitting to theoretically built models of galaxy spectra (Buat et al. 2012; Kriek & Conroy 2013). Both methods are explained in depth in the review by Salim & Narayanan (2020).
It can be shown that the dust attenuation Aλ is related to the Balmer decrement through
$$A_\lambda = \frac{2.5\, k_\lambda}{k_{\mathrm{H}\beta} - k_{\mathrm{H}\alpha}} \log_{10}\!\left[\frac{(\mathrm{H}\alpha/\mathrm{H}\beta)_{\mathrm{obs}}}{2.86}\right], \quad (2)$$
where, combining with equation (1), the reddening is
$$E(B-V) = \frac{2.5}{k_{\mathrm{H}\beta} - k_{\mathrm{H}\alpha}} \log_{10}\!\left[\frac{(\mathrm{H}\alpha/\mathrm{H}\beta)_{\mathrm{obs}}}{2.86}\right]. \quad (3)$$
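A minimal sketch of equations (1)-(3) in Python. The default k-values below correspond to the Calzetti et al. (2000) curve evaluated at Hα and Hβ, which is one common choice; the text above does not commit to a specific attenuation law:

```python
import numpy as np

def balmer_ebv(f_halpha, f_hbeta, k_ha=2.53, k_hb=3.61, intrinsic=2.86):
    """Reddening E(B-V) from the Balmer decrement, per equation (3).

    Default k-values assume the Calzetti et al. (2000) attenuation
    curve; the intrinsic ratio 2.86 assumes Case B recombination,
    T = 1e4 K, n_e = 1e2 cm^-3, as stated in the text.
    """
    ratio = f_halpha / f_hbeta
    return 2.5 / (k_hb - k_ha) * np.log10(ratio / intrinsic)

def attenuation(k_lambda, ebv):
    """A_lambda = k_lambda * E(B-V), from the definition in equation (1)."""
    return k_lambda * ebv

ebv = balmer_ebv(4.0, 1.0)       # observed Halpha/Hbeta = 4.0
print(attenuation(2.53, ebv))    # attenuation at Halpha, ~0.85 mag
```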
EXPECTED ATTENUATION DEPENDENCIES ON GALACTIC PROPERTIES
Considering the galactic scaling laws from the literature, it is possible to make theoretical predictions on how the dust content, and hence the dust attenuation and its observational proxy, the Balmer decrement, scales with the galactic parameters discussed so far. We expect that the dust attenuation, Aλ, scales with dust mass, Md, as well as with geometric factors, γg; hence Aλ ∝ Md γg. Geometrical factors, such as the configuration of the stars, gas and dust within the galaxies, are difficult to constrain and are not considered in depth in this work. However, we can relate the dust mass to the other galactic properties. First, the dust mass scales with the mass of the gas in the galaxy, Mg, through the dust-to-gas ratio, DGR, allowing us to write Md = DGR × Mg. Additionally, the dust mass scales with the mass of the metals, MZ, through the dust-to-metal ratio, DZR, allowing us to write Md = DZR × MZ. We can therefore relate the DGR to the gas metallicity ($Z_g = M_Z/M_g$) and the DZR, giving
$$\mathrm{DGR} = \mathrm{DZR} \times Z_g.$$
We can relate the gas mass to the stellar mass through the MGMS (Lin et al. 2019), $M_g = k_{\rm MGMS} \times M_\star$, where $k_{\rm MGMS}$ is a proportionality constant. Hence, we can write the following equation to determine how the dust mass, and hence the dust attenuation, should scale with the galactic parameters:
$$M_d = \mathrm{DZR} \times Z_g \times k_{\rm MGMS} \times M_\star, \quad (4)$$
where the metallicity Zg depends strongly on the stellar mass and has a secondary inverse correlation with SFR (FMR, Curti et al. 2020a). Since the DZR is approximately constant, these relations suggest the stellar mass will be the most important parameter in determining the dust mass, and so the dust attenuation. This follows from the assertion that all dust is produced in stars, so a galaxy with larger stellar mass will likely have more dust. Equation (4) also suggests the dust attenuation depends directly on the gas metallicity, Zg, and indirectly on SFR through the inverse secondary dependence of Zg on the SFR (FMR).
DATA ANALYSIS METHODS AND STATISTICAL ANALYSIS
Based on the simple modelling and assumptions described in Section 4, we have identified the galaxy properties which should be most strongly related observationally to dust content: the stellar mass, SFR and metallicity, as also suggested in Garn & Best (2010). To determine which are most important for our local sample of galaxies (SDSS), in this work we combine PCC analysis and RF analysis. These two methods are described in the following sections.
PCC analysis
PCC analysis (Lawrance 1976) is a useful tool to describe the correlation between two quantities whilst controlling for others. This allows us to disentangle primary correlations from indirect, secondary correlations.
The PCC for variable A with variable B, controlling for variable C, ρAB|C, is related to the Spearman rank correlation coefficient between A and B, ρAB, and the other pairwise correlations between these variables. Specifically,
$$\rho_{AB|C} = \frac{\rho_{AB} - \rho_{AC}\,\rho_{BC}}{\sqrt{(1-\rho_{AC}^2)(1-\rho_{BC}^2)}},$$
as in Lawrance (1976). We recall that the use of the Spearman rank correlation is advantageous over the Pearson correlation since the Spearman rank correlation first rank-orders the parameters, which relaxes the assumption of linearity between the parameters in favour of monotonicity; this is useful in this work due to the non-linearity of many of the predicted relations (Bluck et al. 2020a; Baker et al. 2022b). See Baba, Shibata & Sibuya (2004) for further details.
The PCCs can be expanded to include more than three variables by using the methods provided in the pingouin (Vallat 2018) package. Yet, controlling for only the two most important variables is often adopted, as this maximizes performance and accuracy.
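As an illustration of such a call, a minimal sketch using pingouin on mock data (the column names and built-in correlations are invented for the example, not drawn from the SDSS sample):

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 1000

# Mock galaxy sample with built-in correlations; purely illustrative.
mass = rng.normal(10.5, 0.5, n)                    # log stellar mass
metallicity = 0.5 * mass + rng.normal(0, 0.2, n)   # correlated with mass
balmer = 0.3 * mass + 0.1 * metallicity + rng.normal(0, 0.1, n)
df = pd.DataFrame({"mass": mass, "metallicity": metallicity,
                   "balmer": balmer})

# Spearman-rank partial correlation of the Balmer decrement with
# metallicity, controlling for stellar mass.
print(pg.partial_corr(data=df, x="metallicity", y="balmer",
                      covar="mass", method="spearman"))
```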
These coefficients can also be used to identify the direction of maximum variance of variable C in the parameter space defined by parameters A and B. On a plot of variable A on the y-axis against variable B on the x-axis, with variable C on the z-axis (e.g. colour-coded), an arrow can be drawn in the x-y plane with angle θ clockwise from the positive y-axis, denoting the direction of maximum variation, or largest gradient, in variable C. Such arrow angles can be quantified by using the PCCs through the following equation:
$$\tan(\theta) = \frac{\rho_{CB|A}}{\rho_{CA|B}},$$
adapted from Piotrowska et al. (2020) and Bluck et al. (2020a).
To determine the errors on the PCCs and on θ when applied to the galaxies in the samples, bootstrap random sampling was used: we took 100 random samples of the data with replacement, each the same size as the original data set, and computed the standard deviation of the results, as is done in Baker et al. (2022b).
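Putting the angle formula and the bootstrap together, a minimal sketch computing θ and its uncertainty on mock data (the helper pcc implements the three-variable partial-correlation formula above; all numbers are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

def pcc(a, b, c):
    """Spearman partial correlation rho_{ab|c} via the standard formula."""
    r_ab = spearmanr(a, b).correlation
    r_ac = spearmanr(a, c).correlation
    r_bc = spearmanr(b, c).correlation
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

def arrow_angle(mass, metallicity, balmer):
    """Angle of steepest increase of the Balmer decrement (colour axis)
    in the metallicity (x) vs. mass (y) plane, clockwise from +y."""
    rho_cb_a = pcc(balmer, metallicity, mass)   # rho_{CB|A}
    rho_ca_b = pcc(balmer, mass, metallicity)   # rho_{CA|B}
    return np.degrees(np.arctan2(rho_cb_a, rho_ca_b))

# Mock data as in the earlier sketch; bootstrap with 100 resamples.
rng = np.random.default_rng(0)
n = 1000
mass = rng.normal(10.5, 0.5, n)
metallicity = 0.5 * mass + rng.normal(0, 0.2, n)
balmer = 0.3 * mass + 0.1 * metallicity + rng.normal(0, 0.1, n)

angles = []
for _ in range(100):
    idx = rng.integers(0, n, n)  # resample with replacement
    angles.append(arrow_angle(mass[idx], metallicity[idx], balmer[idx]))
print(f"theta = {np.mean(angles):.1f} +/- {np.std(angles):.1f} deg")
```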
We note that PCCs can provide a useful indication of the direct correlations as long as these are monotonic.
RF analysis
In this work, we also use RF analysis. This is a widely used machine-learning method, which uses decision trees to determine which parameters are most important in predicting the target variable for a set of data. The decision trees work by trying to reduce the Gini impurity at each branch (Pedregosa et al. 2011). The parameter importances can then be determined by averaging each parameter's contribution to the decrease in Gini impurity over all of the decision trees in the forest, representing which parameters were used most to predict the value of the target variable.
We used RF regression to predict the parameter importances in determining the target variable (the Balmer decrement in this work). The data are split into a train and a test sample, with a 50:50 split, where the train sample is used to train the regressor and the test sample is used to evaluate the accuracy of the regressor. We used the RF regressor from the PYTHON package Scikit-learn (Pedregosa et al. 2011).
Compared to PCC analysis, RF analysis does not require the variables to have a monotonic relationship and can simultaneously explore the dependence on multiple intercorrelated quantities (Bluck et al. 2020a, b). PCC analysis additionally tells us the direction of the dependence, whereas RF does not.
To maximize the efficiency and accuracy of the regressor, we fine-tune specific execution parameters within the RF function, known as hyperparameters. In this work, we fine-tuned the hyperparameter dictating the minimum number of samples allowed to exist at the end of a decision tree; in other words, it controls how many splits the decision tree is allowed to make when training the regressor, which in turn controls the size the decision tree is allowed to grow to. This hyperparameter is known as the minimum number of samples on the final leaf, where leaf refers to a final node of the decision tree. If this hyperparameter is set too low, the regressor has a tendency to overfit the training sample by splitting at every opportunity until the training data are unphysically well fit; however, if the value is too large, the accuracy of the regressor is low, as it has not been able to fit the training data sufficiently. The results of the fine-tuning for the local galaxies are presented in Appendix C.
The errors on the determined importances were calculated by repeating the whole process 100 times, re-splitting the data and retraining the regressor each time. The error on each importance was then taken as the standard deviation of the calculated importances for that parameter.
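A minimal sketch of this procedure with Scikit-learn on mock data; the value of min_samples_leaf is a hypothetical stand-in for the tuned hyperparameter of Appendix C:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Mock sample: the target depends on mass and metallicity but not on
# the random control variable. Purely illustrative.
mass = rng.normal(10.5, 0.5, n)
metallicity = 0.5 * mass + rng.normal(0, 0.2, n)
random_ctrl = rng.uniform(0, 1, n)
balmer = 0.3 * mass + 0.1 * metallicity + rng.normal(0, 0.1, n)

X = np.column_stack([mass, metallicity, random_ctrl])
names = ["mass", "metallicity", "random"]

importances = []
for seed in range(100):  # re-split and retrain 100 times, as in the text
    X_tr, _, y_tr, _ = train_test_split(X, balmer, test_size=0.5,
                                        random_state=seed)  # 50:50 split
    rf = RandomForestRegressor(n_estimators=100, min_samples_leaf=25,
                               random_state=seed).fit(X_tr, y_tr)
    importances.append(rf.feature_importances_)

mean, err = np.mean(importances, axis=0), np.std(importances, axis=0)
for name, m, e in zip(names, mean, err):
    print(f"{name}: {m:.3f} +/- {e:.3f}")
```

The random control variable should receive near-zero importance; any genuine parameter whose importance is comparable to it carries no real predictive power, which is how the text benchmarks the SFR and inclination below.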
For further detail on RF analysis, see Bluck et al. ( 2022 ).
RESULTS
In this section, we present the results of our statistical analysis using both PCC and RF. We also explore the dependence of the Balmer decrement on the various parameters and identify projections of these multidimensional parameter spaces that minimize the scatter of the individual relations, hence finding analytical relations that simultaneously describe the dependence of the Balmer decrement on multiple galactic quantities.
We then use these projections to compare the derived analytical relations between local galaxies (SDSS) and galaxies at z ∼ 1-3 (KLEVER and MOSDEF).
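The single-parameter projection quoted in the abstract can be evaluated directly; a minimal sketch, where the example galaxy values are invented:

```python
import numpy as np

def reduced_mass(log_mstar, oh, sigma_v):
    """Reduced mass mu from the abstract's best-fitting projection:
    mu = log M* + 3.67 [O/H] + 2.96 log10(sigma_v / 100 km/s)."""
    return log_mstar + 3.67 * oh + 2.96 * np.log10(sigma_v / 100.0)

# Example: a log M* = 10.5 galaxy at solar metallicity ([O/H] = 0)
# with a nebular velocity dispersion of 120 km/s.
print(reduced_mass(10.5, 0.0, 120.0))  # ~10.73
```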
Random forest
We investigate the importance of various galactic parameters in predicting the Balmer decrement using RF regressors, in order to explore the theoretical framework described above. Specifically, we investigate the importance of the following galactic parameters: stellar mass (M*), velocity dispersion inferred from Hα (σHα), gas-phase metallicity ([O/H]), SFR calculated using SED fitting (SFR_SED), inclination of the galaxy (i), and a control random variable (R), calculated using the NUMPY (Harris et al. 2020) random number generator uniformly between 0 and 1, to test how meaningful the calculated importances are. The details of the tuning of the hyperparameters used in the RF, in order to increase the accuracy of the regressor, can be found in Appendix C.
The importance of the velocity dispersion of the stars is additionally investigated in Appendix D, showing that it has a similar importance to the nebular velocity dispersion; however, stellar velocity dispersion data were not available for the higher redshift galaxies, and so it was not used further in this work.
We do not include the SFR inferred from the Hα emission line, since Hα also enters the Balmer decrement. Moreover, the Hα flux is itself corrected for extinction through the Balmer decrement; together, these aspects result in a spurious correlation with the Balmer decrement itself. This is, however, discussed further in Appendix A.
The RF regressor was first run on our sample of local galaxies with no pre-selection on their inclination. The importance of each parameter, along with its error, in determining the Balmer decrement is shown in Fig. 1: the stellar mass is by far the most important parameter in determining the Balmer decrement (consistent with previous studies, e.g. Garn & Best 2010), followed by the inclination, i.
From Fig. 1, we can see that the inclination is very important. This makes sense physically: if we see a galaxy edge-on, the flux from the galaxy will encounter more dust on average as it travels to us than if the galaxy were face-on, implying larger dust attenuation. The effect of inclination on observed galaxy properties has been studied in depth for the galaxies in the SDSS survey, for example by Maller et al. (2009), who investigated the correlation between inclination and both the colour and size of the galaxies, in order to determine inclination corrections to these parameters. They show that the correction in the g band can reach up to 1.2 mag, with the corrections not depending much on the galaxy luminosity, but depending strongly on the Sérsic index. Hence, this importance is not intrinsic to how the dust attenuation or dust content is related to the galaxy properties, but simply a consequence of the viewing angle. To reduce this effect and focus on more fundamental parameters, we pre-select galaxies in terms of their inclination such that its importance is as small as possible whilst maintaining a large enough sample of galaxies to which we can confidently apply our statistical tools. We investigated this by cutting the inclination to be less than 60°, 45°, and 30°, with 90° being edge-on and 0° being face-on. The inclination importance dropped significantly, and we found that a sample with inclination less than 45° had negligible inclination importance (as quantified further below) whilst maintaining a large sample. This cut the sample from 65 613 to 21 488 galaxies, whilst maintaining all other signal-to-noise selection criteria discussed in Section 2.1.6.
Using the sample of local galaxies with inclination i < 45°, we recalculated the importance of each galactic parameter in determining the Balmer decrement. These importances and their errors are shown as the green bars with stars in Fig. 1. The stellar mass is still the most important parameter, now followed by the velocity dispersion and then by the metallicity. The SFR has very little importance, barely above the random variable. The inclination is now as important as the random variable, R. Therefore, this selection on the inclination was adopted for this work, and henceforth all local galaxies analysed have inclinations i < 45° unless specified otherwise, to control for its effect on the Balmer decrement.
2D histogram visualization and PCC arrows
To better visualize the relative importance of the parameters, in Fig. 2 we plot the local galaxies used in this work (with inclination i < 45°) in a hexagonal 2D binning scheme, since hexagons allow for better data aggregation around the bin centre than rectangular bins. Since the stellar mass is found by the RF to be the most important parameter, we keep the x-axis as stellar mass and vary the y-axis between SFR_SED, metallicity and the nebular velocity dispersion. The dependent variable (i.e. the one for which we want to find the dependence on the other quantities), on the z-axis (colour-coded), is always the Balmer decrement. The galaxies were binned in hexagonal bins, and the median Balmer decrement of the galaxies in each bin was calculated. Bins with fewer than 25 galaxies were ignored. The contours show the density of the galaxies in this space, with the outermost contour containing 95 per cent of the galaxies in the sample. The contours in Fig. 2(c) do not connect due to the sharp cut in the velocity dispersion, which produces a discontinuity in the density distribution.
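Such a diagram can be produced with matplotlib's hexbin, which supports both the median aggregation and the minimum bin count described above. Below is a minimal sketch on synthetic data; all variables are illustrative stand-ins for the actual catalogue quantities.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
mass = rng.normal(10.0, 0.5, size=20000)  # stand-in for log M*
metallicity = 0.3 * (mass - 10.0) + rng.normal(0, 0.1, size=mass.size)
balmer = 2.86 + 0.8 * (mass - 10.0) + rng.normal(0, 0.2, size=mass.size)

fig, ax = plt.subplots()
hb = ax.hexbin(mass, metallicity, C=balmer, gridsize=30,
               reduce_C_function=np.median,  # median Balmer decrement per bin
               mincnt=25)                    # ignore bins with < 25 galaxies
fig.colorbar(hb, ax=ax, label="median Balmer decrement")
ax.set_xlabel("log M*")
ax.set_ylabel("[O/H]")
plt.show()
```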
The PCC arrows were calculated using the binned galaxy parameters rather than the individual galaxies, to avoid the analysis being dominated by the inner, most populated regions. Here, the PCC-derived arrows indicate the direction in which the Balmer decrement has the largest average gradient on the 3D surface of each diagram, with the angle defined clockwise from the positive y-axis. The error on the angles was calculated through bootstrap random sampling.
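A minimal sketch of how the partial correlation coefficients and the arrow angle can be computed follows. The first-order partial-correlation formula is standard; the angle convention (clockwise from the positive y-axis, following e.g. Bluck et al. 2020) is an assumption matching the description above, and the data are synthetic.

```python
import numpy as np

def pearson(a, b):
    """Ordinary Pearson correlation coefficient."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

def partial_corr(a, b, c):
    """Correlation of a and b at fixed c (first-order partial correlation)."""
    r_ab, r_ac, r_bc = pearson(a, b), pearson(a, c), pearson(b, c)
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

def arrow_angle(x, y, z):
    """Angle of steepest gradient of z in the (x, y) plane,
    measured clockwise from the positive y-axis."""
    rho_xz = partial_corr(x, z, y)  # z vs x at fixed y
    rho_yz = partial_corr(y, z, x)  # z vs y at fixed x
    return np.degrees(np.arctan2(rho_xz, rho_yz))

# Illustrative data: z depends on both x and y, and x and y are correlated.
rng = np.random.default_rng(2)
x = rng.normal(size=10000)
y = 0.5 * x + rng.normal(size=x.size)
z = 0.8 * x + 0.4 * y + rng.normal(scale=0.5, size=x.size)
print(f"arrow angle: {arrow_angle(x, y, z):.1f} deg")
```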
The colour-shading and gradient arrows visually illustrate how the Balmer decrement depends on all of these parameters with varying strength. Considering the angles of the arrows on each of the plots, the Balmer decrement has a strong correlation with the stellar mass. The colour shading and PCC arrow in panel (a) visually confirm that, at fixed stellar mass, there is essentially no dependence of the Balmer decrement on SFR. Panels (b) and (c) visually show that, at fixed stellar mass, the Balmer decrement also depends significantly on both the metallicity and the velocity dispersion. The inclination of the PCC arrows in (b) and (c) being close to 45° would naively indicate that the dependence on metallicity and velocity dispersion is stronger than inferred from the RF, and nearly as strong as the dependence on the stellar mass. However, one has to take into account that these 2D histograms consider the Balmer decrement dependence on only two quantities at a time, so any residual dependence not associated with the quantities on the plot must be taken up by one of them. Therefore, if metallicity and velocity dispersion are correlated with each other, it is likely that in panel (b) the metallicity is also picking up the Balmer decrement dependence on the velocity dispersion, and vice versa in panel (c) the velocity dispersion is picking up the dependence on the metallicity.
Partial correlation coefficients
In this section, we further investigate the importance of the various parameters identified in the previous sections by using PCC analysis on all parameters whilst keeping the two most important parameters constant. With respect to the RF analysis, the (full) PCC additionally tells us the direction (sign) of the dependence. Again, we apply this analysis to both samples of local galaxies, with and without a selection on their inclination, in order to explore its effect.
To determine the PCCs for the sample of local galaxies with no cut on their inclination, we kept the two most important parameters constant. The PCCs between the Balmer decrement and the galaxy parameters deemed important in this analysis (stellar mass, metallicity, velocity dispersion, inclination, and the SFR calculated using SED fitting) are shown in Fig. 3. A similar analysis including the SFR derived from Hα is shown in Appendix A, supporting the conclusion that the relation between the Balmer decrement and SFR_Hα is driven mostly by the fact that the Balmer decrement was used to dust correct the Hα flux in SFR_Hα, as well as by Hα also appearing in the Balmer decrement. The green bars with stars show the PCCs using galaxies with inclination i < 45°, and the blue bars with circles represent the PCCs using the galaxies with no selection on their inclination.
For the sample with no selection on its inclination, the strongest correlation with the Balmer decrement is with the stellar mass and then the inclination, which is consistent with the RF results shown in Fig. 1. The PCCs for the galaxies with inclination i < 45° show that the stellar mass is still the most strongly correlated parameter with the Balmer decrement, now followed by the metallicity and the velocity dispersion, with the inclination being much less correlated compared with the sample with no selection on its inclination. Hence, these results also support the selection on the inclination of the local galaxies, allowing the effects of the inclination on the Balmer decrement to be controlled for. The exact values of the PCCs and the RF importances for each galaxy parameter may vary between the methods and depend on which sample is being analysed. This is because the PCC analysis only controls for the two most important parameters in each sample, so the exact values of the PCCs will still be affected by secondary correlations (which are therefore picked up by the two most important parameters), which the RF analysis is instead able to take into account. Hence, minor differences in the RF importances and PCC values for the galaxy parameters are to be expected.

Figure 2. SFR (calculated using SED fitting), metallicity and velocity dispersion (normalized by 100 km s^-1) as a function of stellar mass, colour-coded by the Balmer decrement (i.e. 2D histograms in which the Balmer decrement is the dependent variable), for local galaxies (SDSS). The grey arrows denote the direction in which the Balmer decrement has the largest gradient, determined using the PCC coefficients, with the angle defined clockwise from the positive y-axis. The colour gradients and arrows clearly indicate a strong dependence on stellar mass and, at a given stellar mass, also a strong dependence on both metallicity and velocity dispersion, but little or no dependence on SFR. The black contours indicate the density of the galaxies in each diagram, with the outermost contour containing 95 per cent of the galaxies. The contours in (d) do not join due to the sharp cut in σ_Hα.
We additionally see that the PCCs between the Balmer decrement and all parameters except the SFR calculated from SED fitting are positive for both samples, implying a positive correlation between the Balmer decrement and those parameters. The PCC between the Balmer decrement and the SFR calculated from SED fitting, however, is negative for the sample with no selection on its inclination, and almost zero for the sample with inclination i < 45°, implying that any correlation is driven by the inclination, or some other parameter that cross-correlates them, and that this effect is reduced when the inclination is controlled for.
Establishing the analytical dependence of the Balmer decrement on galaxy properties
Combining the results from the RF and PCC analyses, we see that the stellar mass is by far the most important parameter in determining the Balmer decrement. Both the metallicity and the velocity dispersion have significant importance; however, the two analysis methods do not agree on their order of importance, with the RF analysis ranking the velocity dispersion slightly higher than the metallicity, and the PCC analysis ranking the metallicity and the velocity dispersion at similar levels. In this section, we investigate the analytical dependence of the Balmer decrement on these important galaxy properties.
To quantitatively investigate how the Balmer decrement depends on these galactic parameters, we created track plots of the galaxies, with the stellar mass on the x-axis, the Balmer decrement on the y-axis, and the tracks binned in SFR_SED, metallicity, and nebular velocity dispersion. The tracks represent the stellar mass versus Balmer decrement relation in constant bins of the third variable. We chose the stellar mass to be on the x-axis since it showed the strongest correlation with the Balmer decrement in the analysis of the previous sections.

Figure 3. PCC of the Balmer decrement with the different galaxy parameters for local galaxies (SDSS). Green bars with stars show the PCC values for galaxies with inclination i < 45°, and blue bars with circles show the PCC values for galaxies with all inclinations. The stellar mass is most strongly and intrinsically correlated with the Balmer decrement, in agreement with the RF results, followed by the inclination for the sample of galaxies with no selection on their inclination. Selecting for inclination i < 45° greatly reduces the PCC value of the Balmer decrement with inclination; in this case the second strongest Balmer decrement correlation is with metallicity and velocity dispersion, while the correlation with SFR_SED (SFR calculated via SED fitting) becomes insignificant.
These track plots are shown in Fig. 4, which illustrates that, while there is always a strong Balmer decrement dependence on the stellar mass, at a fixed stellar mass the Balmer decrement depends most on metallicity and least on SFR. The strengths of the Balmer decrement versus metallicity, versus SFR, and versus velocity dispersion dependences are themselves stellar mass-dependent. The dependence on velocity dispersion is strongest at higher mass, and this trend is inverted for metallicity, with the dependence largest at low masses. There is negligible dependence of the Balmer decrement on the SFR at all masses investigated here, so it is not considered further in the analysis below.
The interdependencies of these galactic parameters are clearly shown here. Following the methodology proposed in Mannucci et al. (2010), we attempted to reduce the dimensionality of the problem by rotating the stellar mass, metallicity and velocity dispersion parameter space such that the projection minimizes the dispersion in the Balmer decrement. This projection reduces the dependence of the Balmer decrement to only one parameter, which we defined as the reduced mass

μ = log M* + α[O/H] + δ log σ_100, (7)

where M* is in units of solar mass, and σ_100 is the velocity dispersion (measured from Hα) normalized by 100 km s^-1. This definition and normalization are maintained throughout the rest of this paper. Additionally, α and δ are parameters to be determined so as to minimize the dispersion in the Balmer decrement. The minimization method is discussed further in Appendix E; the determined values of α and δ that minimize the dispersion in the Balmer decrement are 3.67 and 2.96, respectively. Using these minimization parameters, we recreated Fig. 4 but replacing the stellar mass with the reduced mass μ at minimum dispersion, as shown in Fig. 5. The tracks in both Figs 5(a) and (b) are much less spread in Balmer decrement, at a given value of the reduced mass, compared to the tracks in Fig. 4. This shows that the dependence of the Balmer decrement on the metallicity and velocity dispersion is greatly reduced, indicating that our minimization analysis reduced the dimensionality of the problem so that the Balmer decrement depends on one parameter, μ.
To further illustrate the effect of using the reduced mass over the stellar mass, we ran the RF regression on the galaxies with inclination i < 45°, whilst including the reduced mass μ in the analysis as an extra parameter, to test whether μ is now the most important parameter compared to the other global galactic parameters. We also included the un-minimized parameter μ_0 = log M* + [O/H] + log σ_100 to test whether the importance of μ is simply due to the RF picking up on the linear combination of the other parameters, or whether the minimization has had an effect. The importances of each of the parameters are shown in Fig. 6, showing that the importance of μ in determining the Balmer decrement is dominant, with all other parameters, including μ_0, having next to no relative importance. Hence, the reduced mass encapsulates the majority of the importance of all the other galactic parameters considered in this work.
To show how well this minimization worked on the galaxies themselves, we plot the galaxy contours with the Balmer decrement on the y-axis against (a) the stellar mass and (b) the reduced mass, shown in Fig. 7. In both plots, the mean and error on the mean of the Balmer decrement are shown in blue, and the median and the 84th-16th percentile range are shown in green, each calculated in bins 0.15 dex wide of the x-axis parameter. It can be seen that the dispersion, or 84th-16th percentile range, of the Balmer decrement of the galaxies is reduced when moving from having the stellar mass on the x-axis in (a) to the reduced mass on the x-axis in (b). The unweighted average 84th-16th percentile range across all the bins in the x-parameter was calculated for each plot, and is shown as the red error bar on the plots, with (a) having 0.906 and (b) having 0.849. This shows the effect of the minimization, reducing the percentile range by 6.3 per cent. This result indicates that part of the dispersion in the Balmer decrement versus stellar mass diagram is not intrinsic, but a consequence of the secondary dependences on metallicity and velocity dispersion. Once these dependences are taken into account by introducing μ, the scatter is reduced. The residual scatter is likely due to diverse evolutionary processes within the galaxies, although it may also partly be due to observational errors. The contribution of the observational errors to the overall scatter was estimated by taking the median error on the measurement of the Balmer decrement across the sample used in this work, giving an observational error of 0.32. The reduction in the percentile range is small, although this is not surprising since the stellar mass accounted for the majority of the variation in the Balmer decrement, so accounting for the less important (but still significant) parameters would have a small but non-negligible effect.
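The binned 84th-16th percentile statistic described above can be computed as in the following sketch, which compares the scatter in stellar-mass space with that in reduced-mass space on toy data; the toy relation, its coefficients, and the resulting scatter values are illustrative only.

```python
import numpy as np

def mean_percentile_range(x, y, width=0.15, min_count=25):
    """Unweighted average of the 84th-16th percentile range of y,
    computed in bins of x of the given width (dex)."""
    edges = np.arange(x.min(), x.max() + width, width)
    ranges = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x >= lo) & (x < hi)
        if sel.sum() >= min_count:
            p16, p84 = np.percentile(y[sel], [16, 84])
            ranges.append(p84 - p16)
    return np.mean(ranges)

# Compare scatter in stellar-mass space vs reduced-mass space (toy data).
rng = np.random.default_rng(3)
logM = rng.normal(10.0, 0.5, 30000)
OH = 0.2 * (logM - 10.0) + rng.normal(0, 0.05, logM.size)
logsig = 0.3 * (logM - 10.0) + rng.normal(0, 0.05, logM.size)
balmer = (3.0 + 0.8 * (logM - 10) + 1.5 * OH + 1.2 * logsig
          + rng.normal(0, 0.3, logM.size))

mu = logM + 3.67 * OH + 2.96 * logsig  # reduced mass, best-fit alpha/delta
print(mean_percentile_range(logM, balmer), mean_percentile_range(mu, balmer))
```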
In order to provide a functional form of the Balmer decrement versus stellar mass and versus reduced mass dependences, we fit a third-order polynomial to the mean of the Balmer decrement in each of those parameter spaces, providing the fits in equations (8) and (9), of which the reduced-mass fit is

Hα/Hβ = (−0.027 ± 0.005) μ^3 + (0.848 ± 0.160) μ^2 + (−8.385 ± 1.608) μ + (29.856 ± 5.368); (9)

the resulting fits are shown in Fig. 7 as the orange lines.
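A sketch of the fitting step: the mean Balmer decrement is computed in 0.15 dex bins and a third-order polynomial is fit to the binned means with np.polyfit. The data are synthetic, so the resulting coefficients are illustrative, not those of equations (8) and (9).

```python
import numpy as np

def binned_mean(x, y, width=0.15, min_count=25):
    """Mean of y in bins of x, plus the bin centres."""
    edges = np.arange(x.min(), x.max() + width, width)
    centres, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x >= lo) & (x < hi)
        if sel.sum() >= min_count:
            centres.append(0.5 * (lo + hi))
            means.append(y[sel].mean())
    return np.array(centres), np.array(means)

# Third-order polynomial fit to the binned mean Balmer decrement.
rng = np.random.default_rng(4)
mu = rng.normal(10.5, 0.8, 20000)
balmer = 3.0 + 0.7 * (mu - 10.5) + rng.normal(0, 0.3, mu.size)
c, m = binned_mean(mu, balmer)
coeffs = np.polyfit(c, m, deg=3)  # highest-order coefficient first
print(coeffs)
```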
In order to further demonstrate that the reduced mass μ encapsulates all the variation of the Balmer decrement due to indirect correlations with the other galactic parameters (metallicity and velocity dispersion), we recreate Fig. 2 by plotting both the metallicity and velocity dispersion as functions of the reduced mass μ, colour-coded by the Balmer decrement, as shown in Fig. 8. When replacing the stellar mass with the reduced mass μ, the dependence of the Balmer decrement on both the metallicity and the velocity dispersion (whilst controlling for the reduced mass) is greatly reduced, as can be seen by eye or by considering the arrow angles, which each rotate to within about 20° of horizontal (90°). Hence, in this projected parameter space the dependence of the Balmer decrement is encapsulated by a single parameter.
Summary
We have identified the most important parameters in determining the Balmer decrement through RF and PCC analysis, finding the stellar mass to be the most important, followed by metallicity and nebular velocity dispersion (once the dependence on the inclination, i, is removed by selecting galaxies with i < 45°).
The strong dependence on stellar mass is in line with the expectation from equation (4), where this dependence primarily comes from the MGMS. The dependence on metallicity is also in line with equation (4), where this dependence comes from the relationship between the dust-to-gas ratio and the metallicity, since DGR = DZR × Z_g.
The additional dependence on the nebular velocity dispersion was not expected. It may arise because the nebular velocity dispersion traces the gravitational potential of the galaxy, that is, the capability of the galaxy to retain dust against the strong radiation pressure on the dust and to retain metals against metal loss via winds and gas outflows (Chisholm et al. 2015).
Additionally, our analysis has determined that the SFR derived from Hα appears important solely because the Hα flux used in calculating the SFR is itself dust corrected, and that the SFR calculated using SED fitting is a more valid tracer of the SFR in this work. This tracer of the SFR is shown to be unimportant in determining the Balmer decrement when compared to the stellar mass, metallicity and velocity dispersion.
By combining these important parameters into the reduced mass μ, we have been able to collapse the majority of the dependence of the Balmer decrement onto this single parameter. This will allow for a much easier comparison with other samples of galaxies in the next section.
Quantitatively, the dependence on stellar mass and metallicity is in the right direction; however, it does not match exactly the expectations from the simple predictions in equation (4). To better predict these observations, more advanced modelling would be required, including a comparison with numerical simulations and potentially considering a geometrical factor dependent on mass, which is currently assumed constant. Zuckerman et al. (2021) argue that the dust attenuation is related to the thickness of the galaxy, and since a galaxy with higher stellar mass will have a greater thickness, it is likely that the disc thickness contributes some of the correlation between stellar mass and dust attenuation we observe.
Comparison of samples at high redshifts
Although the focus of this paper is primarily to investigate the scaling relations between dust attenuation (traced by the Balmer decrement) and galaxy properties in the local Universe, it is also interesting to explore whether such relations hold at high redshift. Samples at high redshift have much lower statistics and higher uncertainties; hence, the level of analysis performed in this paper on the local sample is certainly not possible on high-redshift samples, at least not yet. However, we can explore whether their properties are consistent with the local findings.
Specifically, we investigate whether the scaling relations given by equations (8) and (9), which we have found in the local Universe between dust attenuation and other global galactic properties, hold at high redshift. We do this by comparing the observed values of the Balmer decrement for the galaxies in the KLEVER and MOSDEF surveys (z ∼ 1-3) with the galaxies in the SDSS survey, both in stellar mass space and in reduced mass space, which should encapsulate all of the parameters important in determining the Balmer decrement. As mentioned previously, new samples at even higher redshift from NIRSpec-JWST surveys do not yet have the information required to perform these tests.
Plots comparing the local and higher redshift galaxies are shown in Fig. 9, where the Balmer decrement is plotted against stellar mass and also against reduced mass for the galaxies in the KLEVER survey, panels (a) and (b), and the galaxies in the MOSDEF survey, panels (c) and (d). Here, the mean and error on the mean were calculated in order to focus on the primary dependences. For both the galaxies in KLEVER and MOSDEF, this shows the Balmer decrements to overlap with those of the local galaxies in both the stellar mass and reduced mass space. These findings are consistent with no redshift evolution of these relations. For the simple dependence on mass, this finding agrees with the results from Shapley et al. (2022) for the galaxies in MOSDEF, and with the results from Shapley et al. (2023) at even higher redshifts.
The Balmer decrement versus stellar mass relationship for the galaxies in the MOSDEF survey seems to flatten slightly compared to galaxies in the SDSS survey, which can be explained by the fact that the MOSDEF survey is complete for stellar masses below 10^10.5 M⊙ but incomplete for dusty red SFGs with stellar masses above 10^10.5 M⊙. Hence, the dustiest massive galaxies might be missed, causing the Balmer decrement versus stellar mass relationship to flatten at high stellar masses.
This lack of evolution in the stellar mass space could be due to a truly redshift-invariant Balmer decrement versus stellar mass relationship. As suggested by Shapley et al. (2022), a non-evolving relation can arise due to offsetting effects from the simultaneous evolution of gas mass surface density, DGR, metallicity, dust geometry, and/or dust mass absorption coefficients. Our finding of no evolution even in the Balmer decrement relation with reduced mass μ makes the explanation of a combination of different evolutionary effects cancelling each other unlikely. Our findings are more supportive of a scenario in which the dust production mechanism and associated distribution in galaxies do not change with cosmic time up to z ∼ 2-3, i.e. the multidimensional relationship between dust attenuation and the galactic quantities does not change with cosmic epoch; galaxies simply populate different regions of this multidimensional surface at different cosmic epochs.

Figure 9. Balmer decrement as a function of stellar mass (left panels) and reduced mass (right panels), comparing local galaxies (SDSS) with galaxies at high redshift (z ∼ 1-3) from the KLEVER survey (top panels) and the MOSDEF survey (bottom panels). The distribution of local galaxies in SDSS is shown by the black contours, where the outermost contour contains 95 per cent of the galaxies, and the blue line is the mean Balmer decrement in each 0.15 dex wide bin in stellar mass or reduced mass with at least 25 galaxies present. Shaded blue regions represent the error on the mean in each bin. High-z galaxies in the KLEVER and MOSDEF surveys are shown in orange. The purple segments show the means, and the purple-shaded regions the errors on the mean, for each 0.5 dex wide bin in stellar mass and reduced mass. The green error bars represent the average 16th-84th percentile ranges and the red error bars represent the median error in the Balmer decrement measurement for the high-z galaxies. Panels (a) and (b) show that there is no significant evolution between the Balmer decrement of the galaxies from KLEVER and the local galaxies in both stellar mass and reduced mass space. Similarly, panels (c) and (d) show that there is no significant evolution between the Balmer decrement of the galaxies from MOSDEF and the local SDSS galaxies in both stellar mass and reduced mass space. These results indicate that there is no evolution of the relation between the Balmer decrement and stellar mass, or between the Balmer decrement and reduced mass, up to a redshift of z ∼ 1-3. The slight flattening of the Balmer decrement versus stellar mass relation at high masses (M > 10^10.5 M⊙) might be due to the MOSDEF survey possibly missing a portion of the massive dusty red SFGs.
The inclination of the z ∼ 1-3 galaxies was not controlled for in this work. Lorenz et al. (2023) show that for their sample of galaxies taken from the MOSDEF survey (1.37 ≤ z ≤ 2.61), there is no dependence of the dust attenuation on inclination. They propose a dust model where attenuation occurs in three components: star-forming regions, large dusty star-forming clumps, and a small contribution from the ISM. Since the majority of the attenuation occurs in the roughly spherical star-forming regions and large dusty star-forming clumps, there would be no dependence of the attenuation on the inclination of the galaxy. This three-component model is similar to the widely used two-component model for local galaxies (z ∼ 0), consisting of star-forming regions with optically thick dust primarily around young stars and then the diffuse ISM component (Charlot & Fall 2000). The addition of the large dusty star-forming clumps in the three-component model is supported by observations of z ∼ 2 galaxies from, for example, Schreiber et al. (2011) and Wuyts et al. (2012). This model also supports the dust attenuation versus stellar mass relationship, since these large dusty star-forming clumps have been observed to be both larger (Swinbank et al. 2012) and more common (Tadaki et al. 2013) as stellar mass increases, and a larger dusty star-forming clump will lead to increased dust attenuation due to extended path-lengths for the light.
The analysis in this work suggests that the dust attenuation of our local sample of galaxies (SDSS) depends on the inclination of the galaxies, a result supported by the two-component dust model: our statistical analysis of the galaxy parameters is picking up the dependence between dust attenuation and the inclination of the galaxy, likely caused by the attenuation originating in the diffuse ISM.
We conclude, however, by warning that the comparison with high-redshift galaxies is still plagued by large dispersion and poor statistics, which make even the errors on the means relatively large (as highlighted in Fig. 9); a more thorough exploration requires much larger samples, which may become available with the next-generation near-IR MOS spectrographs (Maiolino et al. 2020).
CONCLUSION
In this work, we have investigated which galactic parameters are most important in determining the dust attenuation in galaxies, as traced by the Balmer decrement, and explored how this varies at different cosmic epochs by comparing local galaxies (SDSS) with samples at z ∼ 1-3 (KLEVER and MOSDEF).
We summarize our results as follows: (i) PCC and RF analysis of local (SDSS) galaxies shows that the stellar mass is the most important parameter in determining the dust attenuation traced by the Balmer decrement. Metallicity and nebular velocity dispersion are also important, but less so than the stellar mass.
(ii) Galaxy inclination obviously has an important effect on the observed attenuation. However, its effect on these results was controlled for by selecting galaxies with inclination i < 45°; with this selection, the Balmer decrement had negligible dependence on the inclination in both the PCC and RF analyses.
(iii) The dependence of the Balmer decrement on the SFR traced by Hα is driven by the fact that Hα is also included in the Balmer decrement, and by the fact that the Balmer decrement is used to dust correct the Hα flux; hence, the correlation between the Balmer decrement and the SFR inferred from Hα is spurious. No dependence of the Balmer decrement on SFR is found if the latter is inferred using SED fitting.
(iv) The dispersion of the Balmer decrement in the rotated parameter space defined by the reduced mass, μ = log M* + 3.67 [O/H] + 2.96 log σ_100, is reduced compared to the dispersion in stellar mass space. This indicates that the variation in the Balmer decrement due to the metallicity and velocity dispersion is captured by this reduced mass.
(v) The dependence of the Balmer decrement on the stellar mass is expected from the MGMS relation (M_H2 versus M*). The dependence on metallicity is also expected from the dust-to-gas ratio. The dependence on velocity dispersion was not expected and may trace the capability of more massive systems (traced by higher velocity dispersion) to better retain dusty clouds against radiation-driven pressure outflows.
(vi) We observe no significant evolution of the relationship between the Balmer decrement and stellar mass up to z ∼ 1-3. Hence, the dust attenuation versus stellar mass relationship does not evolve up to this redshift. We additionally see no significant evolution of the relationship between the Balmer decrement and the reduced mass, μ, indicating that the scaling relations found locally also capture the dust attenuation properties of distant galaxies. This work can be greatly expanded at high redshift with the next-generation, large-multiplexing near-IR spectrographs (e.g. MOONS: Cirasuolo et al. 2020; Maiolino et al. 2020), which will provide spectra for several hundred thousand galaxies, i.e. with statistics similar to the SDSS, around cosmic noon (z ∼ 1-3).
This work can also be extended to higher redshifts using data from JWST's NIRSpec surveys and NIRCam slitless mode. This exploration has already started for the dependence of the Balmer decrement on stellar mass (Shapley et al. 2023), but can be expanded further to also investigate the relation with the reduced mass. JWST spectroscopic surveys are expected to detect thousands of galaxies out to z ∼ 7, for which both Hα and Hβ will be available. Hence, it will be possible to investigate the Balmer decrement across a very large range of redshifts and track the evolution of the dust attenuation versus reduced mass relation.
ACKNOWLEDGEMENTS
GM and RM acknowledge support from the Science and Technology Facilities Council (STFC), from the ERC through Advanced Grant 695671 'QUENCH', and from the UKRI Frontier Research grant RISEandFALL. RM also acknowledges funding from a research professorship from the Royal Society.
Fig. A1 shows a much stronger correlation between SFR_Hα and the Balmer decrement than the correlation between the SFR derived from SED fitting and the Balmer decrement shown in Fig. 2.
Additionally, the results in Sections 6.1.1 and 6.1.3 are repeated whilst including the SFR derived from the Hα emission-line flux. These are shown in Fig. A2, with the RF importances in determining the Balmer decrement for each parameter shown on the left (a), and the PCC with the Balmer decrement for each parameter shown on the right (b). In each plot, the blue bars with circles indicate the results using the sample of local galaxies with no selection on their inclination, and the green bars with stars represent the results using the sample of local galaxies with inclination i < 45°. Again, both of these results show that the relationship between the Balmer decrement and SFR_Hα is much stronger than the relationship with SFR_SED.
These results indicate that this strong relationship between SFR_Hα and the Balmer decrement is a spurious artefact of Hα entering both quantities; moreover, SFR_Hα is corrected for dust attenuation using the Balmer decrement itself, introducing another spurious correlation.
We also show how the SFR derived from SED fitting relates to the SFR derived from the Hα emission-line flux in Fig. A3, where the straight line is fit using Orthogonal Distance Regression (ODR), estimating the slope to be 0.996 ± 0.038, with a scatter of 0.132 dex, measured as the square root of the residuals of the fit.

The 4000-Å break is defined as the ratio of flux on either side of the break observed at 4000 Å in rest-frame galaxy spectra, and acts as a tracer of the young and old stars in a galaxy spectrum. In this work, we follow the narrow 4000-Å break (D4000) definition from Balogh et al. (1999),

D4000 = ∫_{4000}^{4100} f_λ dλ / ∫_{3850}^{3950} f_λ dλ,

where f_λ is the spectral flux density of the galaxy and the integration limits are in Å.
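Under the assumption that the standard narrow bands of Balogh et al. (1999) are used (3850-3950 Å blueward, 4000-4100 Å redward; since the two windows have equal width, the ratio of integrals equals the ratio of mean flux densities), D4000 can be computed as in this short sketch on a toy spectrum:

```python
import numpy as np

def d4000_narrow(wave, flux):
    """Narrow 4000-A break: mean flux density in 4000-4100 A
    divided by the mean in 3850-3950 A (Balogh et al. 1999 bands)."""
    red = (wave >= 4000) & (wave <= 4100)
    blue = (wave >= 3850) & (wave <= 3950)
    return flux[red].mean() / flux[blue].mean()

# Toy rest-frame spectrum with a step at 4000 A.
rng = np.random.default_rng(5)
wave = np.linspace(3800, 4200, 400)
flux = np.where(wave < 4000, 1.0, 1.6) + 0.01 * rng.normal(size=wave.size)
print(f"D4000 = {d4000_narrow(wave, flux):.2f}")
```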
Figure B1. Contours of galaxies in the SDSS survey in D4000 and sSFR space, with the median sSFR overplotted in blue and the best-fitting 2D polynomial in red. The median sSFR was determined using 0.15 dex wide bins in D4000 that contain at least 25 galaxies. This calibration is used to determine the SFR of galaxies in the SDSS survey without directly using Hα.
Figure B2. Relative importance of the different galactic parameters in determining the Balmer decrement for local galaxies (SDSS), calculated using RF. Green bars with stars show importances for galaxies with inclination i < 45°, while blue bars with circles show the importances for galaxies with all inclinations. Stellar mass is the most important parameter in both samples. Selecting for inclination i < 45° reduces the importance of the inclination to be negligible. SFR_D4 is the SFR derived from D4000. The parameter R is a control random variable.
Figure B3. PCC of the Balmer decrement with the different galaxy parameters for local galaxies (SDSS). Green bars with stars show the PCC values for galaxies with inclination i < 45°, and blue bars with circles show the PCC values for galaxies with all inclinations. The stellar mass is most strongly and intrinsically correlated with the Balmer decrement, in agreement with the RF results, followed by the inclination for the sample of galaxies with no selection on their inclination. Selecting for inclination i < 45° greatly reduces the PCC value of the Balmer decrement with inclination; in this case the second strongest Balmer decrement correlation is with metallicity and velocity dispersion, while the correlation with SFR_D4 (SFR derived from D4000) becomes insignificant.
This empirical correlation is used to determine the SFR for galaxies where measurements of Hα are not available, as is done in Brinchmann et al. (2004) and Bluck et al. (2020a).
In this work, the method proposed by Bluck et al. (2020a) is adopted, wherein a calibration is calculated between D4000 and the sSFR by taking the median sSFR in each 0.15 dex wide bin of D4000 with at least 25 galaxies present. This calibration is shown in Fig. B1.
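A sketch of such a calibration: the median log sSFR is computed in 0.15 dex wide bins of D4000 (with at least 25 galaxies per bin) and then interpolated at each galaxy's D4000 value. The variable names and the toy D4000-sSFR relation below are placeholders, not the actual SDSS calibration.

```python
import numpy as np

def calibrate_ssfr(d4000, log_ssfr, width=0.15, min_count=25):
    """Median log sSFR in bins of D4000, used as a lookup calibration."""
    edges = np.arange(d4000.min(), d4000.max() + width, width)
    centres, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (d4000 >= lo) & (d4000 < hi)
        if sel.sum() >= min_count:
            centres.append(0.5 * (lo + hi))
            medians.append(np.median(log_ssfr[sel]))
    return np.array(centres), np.array(medians)

# Apply: interpolate the calibration at each galaxy's D4000.
rng = np.random.default_rng(6)
d4000 = rng.uniform(1.0, 2.2, 10000)
log_ssfr = -8.0 - 2.0 * (d4000 - 1.0) + rng.normal(0, 0.3, d4000.size)
c, med = calibrate_ssfr(d4000, log_ssfr)
log_ssfr_d4 = np.interp(d4000, c, med)  # SFR_D4 follows by adding log M*
```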
Additionally, the SFR calculated using D4000 was used as the tracer of SFR in the statistical analysis of the local galaxies, as was done in Section 5, where the SFR from SED fitting was used instead. Analyses using RF and PCCs to determine the importance of the different parameters in determining the Balmer decrement are shown in Figs B2 and B3, respectively. Both figures show results similar to those shown in Figs 1 and 3 in the main text, where the SFR is instead calculated using SED fitting.
Using the SFR from D4000 instead of the SFR from SED fitting leads to the same conclusion: the dependence of the Balmer decrement on the SFR traced by the Hα flux is spuriously high, because the Hα flux is itself dust corrected using the Balmer decrement and Hα is present in the Balmer decrement itself.
APPENDIX C: RF TUNING
We train an RF regressor on each of the local galaxy samples, first with no selection on their inclination and second with inclination i < 45°, to determine which parameters are most important in determining the Balmer decrement. First, we tuned the minimum number of samples on the final leaf hyperparameter. To do this, we varied this hyperparameter, trained an RF regressor at each value, and then determined the mean absolute error (MAE) and mean-squared error (MSE) between the input and predicted Balmer decrements for both the train and the test sample. The MAE and MSE are metrics of the prediction accuracy of the regressor, i.e. how well the regressor predicts the feature parameter. The optimal hyperparameter is the value at which the difference between the train and test MAE falls below 2 per cent, implying no overfitting. Overfitting occurs when the RF fits the training data so well that it cannot adapt to new test data, leading to a large difference between the prediction accuracy on the train and test samples.
This method was repeated five separate times on each sample, and the averaged MAE at each value of the hyperparameter is shown in Fig. C1 for the train and the test set. Since we see no significant difference between the train and test samples in either case, the two regressors are not overfit. Using the determined best hyperparameter value for each regressor, we ran the RF 100 times on its respective galaxy sample, averaged the importances for each parameter, and took the standard deviation of the importances to represent their error.
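The tuning loop described above might look as follows in Python: for each candidate value of min_samples_leaf, the regressor is trained five times, the train and test MAE are averaged, and the smallest leaf size with a train/test MAE difference below 2 per cent is accepted. The data and the candidate grid are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(4000, 5))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=4000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

best = None
for leaf in [1, 2, 5, 10, 20, 50, 100, 200]:
    maes = []
    for seed in range(5):  # average over five training runs
        rf = RandomForestRegressor(min_samples_leaf=leaf, random_state=seed)
        rf.fit(X_tr, y_tr)
        maes.append((mean_absolute_error(y_tr, rf.predict(X_tr)),
                     mean_absolute_error(y_te, rf.predict(X_te))))
    mae_tr, mae_te = np.mean(maes, axis=0)
    # accept the smallest leaf size with < 2 per cent train/test difference
    if best is None and abs(mae_te - mae_tr) / mae_te < 0.02:
        best = leaf
    print(f"leaf={leaf:4d}  train MAE={mae_tr:.3f}  test MAE={mae_te:.3f}")
print("chosen min_samples_leaf:", best)
```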
APPENDIX D: STELLAR VELOCITY DISPERSION
The dependence of the Balmer decrement on the stellar velocity dispersion was additionally investigated in this work for the local galaxies.
To use the stellar velocity dispersion (σ), we first explored how it relates to the nebular velocity dispersion traced by Hα emission (σ_Hα) by plotting the nebular velocity dispersion as a function of stellar velocity dispersion, with the Balmer decrement on the z-axis. This plot is shown in Fig. D1, using the local galaxies from the SDSS survey cut such that their inclination is less than 45°. The angle of the PCC arrow (44.31°) implies that the dependence of the Balmer decrement on the two velocity dispersions is almost equal. However, the galaxies are offset from the grey line representing the 1:1 relation between the two velocity dispersions, indicating that the two velocity dispersions are not completely interchangeable in this analysis. As is shown for z > 1 galaxies (Übler et al. 2022) and for local galaxies (Crespo Gómez et al. 2021), the stellar velocity dispersion is, on average, twice the nebular velocity dispersion.
To further understand if we could use the stellar velocity dispersion in our analysis of local galaxies instead of the nebular velocity dispersion, we ran RF regression on our sample of galaxies with inclination less than 45°, as is done in the main text, but replacing σ_Hα with σ. The RF regressor was tuned identically to the regressors in the main text, as discussed in Appendix C. The importances for each of the parameters are shown in Fig. D2. Comparing with Fig. 1, we can see that the order of the importances is basically unchanged, indicating that neither of the two velocity dispersions is more important than the other in determining the Balmer decrement.

Figure D2. Plot showing the importances of the different galactic parameters in determining the Balmer decrement, along with their errors, for local galaxies (SDSS), considering σ instead of σ_Hα. There is minimal difference between the importances determined here and those presented in Fig. 1, showing that each velocity dispersion is roughly as important as the other in determining the Balmer decrement. The difference between the average MAE and MSE of the train and test samples being so small implies no overfitting. Parameter R is the random variable and SFR_SED is the SFR derived from SED fitting.
However, since stellar velocity dispersion measurements were not available for the galaxies at z ∼ 1-3 used in this work, we decided instead to use the nebular velocity dispersion for all galaxies considered.
APPENDIX E: DISPERSION MINIMIZATION
For a given α and δ, the dispersion in the Balmer decrement of the local galaxies was determined by binning the galaxies in 0.15 dex bins of μ and taking the standard deviation of the Balmer decrement of the galaxies in each bin, provided there were at least 25 galaxies in the bin. The total dispersion at the given α and δ was then the weighted average of those standard deviations, weighted by the number of galaxies in each bin of μ. The minimum-count requirement mitigated the effect of sparsely populated bins, which would otherwise create artificially low dispersion at large rotations as the galaxies spread out in the parameter space.
The approach taken in this work to identify the best-fitting α and δ was iterative. We started by creating a grid in α, calculated the dispersion at each grid point with δ = 0, and identified the minimum. We then made a grid in δ and calculated the dispersion with the value of α at the minimum. This process was iterated until the minimum parameter values converged. Convergence was reached when the subsequent values of α and δ were equal to the preceding values to within 16 decimal places. For each parameter, the grid had 200 points between -10 and 10. After three iterations, the minimization converged; the final minimization curves for α and δ are shown in Fig. E1. The final values are α = 3.67 and δ = 2.96.
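A sketch of this alternating grid search, with the number-weighted binned dispersion as the objective. The grid range, bin width, and minimum counts follow the description above, while the toy data and the convergence criterion (a fixed number of iterations here, rather than 16-decimal-place agreement) are simplifications.

```python
import numpy as np

def weighted_dispersion(mu, balmer, width=0.15, min_count=25):
    """Number-weighted average of the per-bin standard deviation of the
    Balmer decrement, in bins of mu of the given width."""
    edges = np.arange(mu.min(), mu.max() + width, width)
    stds, counts = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (mu >= lo) & (mu < hi)
        if sel.sum() >= min_count:
            stds.append(balmer[sel].std())
            counts.append(sel.sum())
    if not stds:  # guard against rotations that empty every bin
        return np.inf
    return np.average(stds, weights=counts)

def minimise(logM, OH, logsig, balmer, n_iter=3):
    """Alternating 1D grid searches over alpha and delta."""
    grid = np.linspace(-10, 10, 200)
    alpha, delta = 0.0, 0.0
    for _ in range(n_iter):
        disp = [weighted_dispersion(logM + a * OH + delta * logsig, balmer)
                for a in grid]
        alpha = grid[np.argmin(disp)]
        disp = [weighted_dispersion(logM + alpha * OH + d * logsig, balmer)
                for d in grid]
        delta = grid[np.argmin(disp)]
    return alpha, delta

# Toy demonstration with a known underlying dependence.
rng = np.random.default_rng(8)
logM = rng.normal(10.0, 0.5, 5000)
OH = rng.normal(0.0, 0.1, logM.size)
logsig = rng.normal(0.0, 0.1, logM.size)
balmer = (3.0 + 0.8 * logM + 2.9 * OH + 2.4 * logsig
          + rng.normal(0, 0.2, logM.size))
print(minimise(logM, OH, logsig, balmer))
```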
Figure 1. Relative importance of the different galactic parameters in determining the Balmer decrement for local galaxies (SDSS), calculated using RFs. Green bars with stars show importances for galaxies with inclination i < 45°, while blue bars with circles show the importances for galaxies with all inclinations. Stellar mass is the most important parameter in both samples. Selecting for inclination i < 45° reduces the importance of the inclination to be negligible. SFR_SED is the SFR calculated using SED fitting. The parameter R is a control random variable.
Figure 4. Balmer decrement as a function of stellar mass in bins of (a) SFR (calculated via SED fitting), (b) metallicity, and (c) velocity dispersion. These tracks confirm that, at a fixed stellar mass, the Balmer decrement depends on metallicity and velocity dispersion, but has negligible dependence on SFR. The black contours display the density of galaxies in each diagram, with the outermost contour containing 95 per cent of the galaxies.
Figure 5. Balmer decrement versus reduced mass μ = log M* + α[O/H] + δ log σ_100, where α = 3.67 and δ = 2.96, in bins of metallicity (left panel) and velocity dispersion (right panel). The fact that, at fixed μ, there is little or no dependence of the Balmer decrement on either metallicity or velocity dispersion indicates that the reduced mass has captured these secondary dependences well. In particular, compared to the track plots in Fig. 4, the dependence of the Balmer decrement on the colour-coded parameters is considerably reduced. The black contours display the density of galaxies in these diagrams, with the outermost contour containing 95 per cent of the galaxies.
Figure 6. Importance of the various galactic parameters and the reduced mass μ in determining the Balmer decrement, for local galaxies (SDSS) with inclination i < 45°, as inferred from the RF analysis. The reduced mass μ is now by far the most important parameter, reducing the importance of the stellar mass, metallicity and velocity dispersion to be negligible, meaning that the importance of these three galactic parameters is well incorporated in the reduced mass μ for what concerns their role in determining the Balmer decrement. The un-minimized parameter μ_0 = log M* + [O/H] + log σ_100 is also included to show that the importance of the minimized μ is not simply due to the RF regressor picking up the linear combination of the other parameters. The difference between the average MAE and MSE of the train and test samples being so small implies no overfitting. Parameter R is the random variable and SFR_SED is the SFR calculated using SED fitting.
Figure 7. Contours showing the distribution of the Balmer decrement of local galaxies (SDSS) as a function of (a) stellar mass and (b) reduced mass. The red error bars represent the average 16th-84th percentile range of the Balmer decrement. The blue line represents the mean Balmer decrement in bins of 0.15 dex in the x-axis quantity; the shaded blue region represents the error on the mean in each bin. The green lines represent the median Balmer decrement, and the green-shaded region the 84th-16th percentile range in each bin. The orange line represents the third-order polynomial fit to the mean. The outermost contour contains 95 per cent of the galaxies.
Figure 8. Metallicity and velocity dispersion as a function of reduced stellar mass μ, colour-coded by Balmer decrement (i.e. 2D histograms in which the Balmer decrement is the dependent variable), for local galaxies (SDSS). The grey arrows denote the direction in which the Balmer decrement has the largest gradient, determined using the PCC coefficients. The colour gradients and arrows clearly indicate an even stronger dependence of the Balmer decrement on the reduced stellar mass μ, relative to the dependence on the stellar mass seen in Fig. 2, while the dependence of the Balmer decrement on metallicity and velocity dispersion is now greatly reduced with respect to Fig. 2 (the residual dependence is due to the fact that these diagrams explore the dependence on only two quantities at a time, hence they pick up the residual dependence on all other quantities). The black contours indicate the density of the galaxies in each diagram, with the outermost contour containing 95 per cent of the galaxies. The contours in (b) do not join due to the sharp cut in σ_Hα.
Figure A1. SFR (estimated from the Hα flux) as a function of stellar mass, colour-coded by Balmer decrement for local galaxies (SDSS). The grey arrow denotes the direction in which the Balmer decrement has the largest gradient, determined using the PCC coefficients, with the angle defined clockwise from the positive y-axis. The colour gradients and arrow clearly indicate a strong dependence of the Balmer decrement on the stellar mass and, at a given stellar mass, also a strong dependence on SFR_Hα. Compared to Fig. 2, the correlation between the Balmer decrement and SFR_Hα is much larger than with the SFR derived from SED fitting. The black contours indicate the density of the galaxies, with the outermost contour containing 95 per cent of the galaxies.
Figure A2. Plots showing (a) the importances of the different galactic parameters in determining the Balmer decrement and (b) the PCCs between the Balmer decrement and the different galactic parameters, along with their errors, for local galaxies (SDSS), including the SFR derived from Hα. Green bars with stars show results for galaxies with inclination i < 45°, and blue bars with circles show results for galaxies with all inclinations. The discrepancy between the results for SFR_Hα and the SFR derived from SEDs highlights how the stronger relationship between the Balmer decrement and SFR_Hα is due to the cross-correlation arising from the Balmer decrement being used to dust correct the Hα line flux. Parameter R is the random variable and SFR_SED is the SFR derived from SED fitting.
Figure A3. SFR estimated from SED fitting as a function of the SFR estimated from the Hα flux, colour-coded by Balmer decrement for local galaxies (SDSS). The red line represents the fit using ODR, showing a near one-to-one slope, with a scatter (taken as the square root of the residuals of the fit) of 0.132 dex. The black contours indicate the density of the galaxies, with the outermost contour containing 95 per cent of the galaxies.
Figure C1. Plots showing the tuning performed on the local galaxies with no selection on their inclination, (a) and (b), and on the local galaxies with inclination i < 45°, (c) and (d), used to produce the importances in Fig. 1. Plots (a) and (c) show the minimum number of samples on the final leaf plotted against the average MAE over five iterations for the train and the test sample. The optimal hyperparameter value is the minimum value at which the difference between the train and test MAE is less than 2 per cent. Plots (b) and (d) show the contours of the input Balmer decrement plotted against the predicted Balmer decrement for the train (orange) and test (blue) samples, with a 1:1 line in blue, showing that the regressor is not overfitting for either sample, given the strong overlap of the two sets of contours. The outermost contour of each sample contains 95 per cent of the galaxies.
Figure D1. Nebular velocity dispersion as a function of the stellar velocity dispersion, colour-coded by the Balmer decrement for local galaxies (SDSS). The grey arrow denotes the direction in which the Balmer decrement has the largest gradient, determined using the PCC coefficients, with the angle defined clockwise from the positive y-axis. The arrow angle indicates that the dependence of the Balmer decrement on the two velocity dispersions is almost equal. The straight grey line has a gradient of unity to guide the eye. The galaxies are slightly biased to lower nebular velocity dispersion and higher stellar velocity dispersion, as expected from the literature (Crespo Gómez et al. 2021; Übler et al. 2022). The offset between the two velocity dispersions indicates that they are not exactly interchangeable. The black contours indicate the density of the galaxies in this parameter space, with the outermost contour containing 95 per cent of the galaxies.
Figure E1. Plots of the dispersion in the Balmer decrement of the local galaxies (SDSS) against the minimization parameters α (a) and δ (b) for the final iteration of the minimization analysis. The minimum of the dispersion curve and the corresponding value of the minimization parameter are shown by the horizontal and vertical lines, respectively. The minimization parameters are defined through μ = log M* + α[O/H] + δ log σ_100, as in equation (7).
Table 1. Definitions of the emission-line ratios used to determine the metallicity of the galaxies through strong-line calibrations. The [O II]λλ3727,29 notation is equivalent to writing [O II]λ3727 + [O II]λ3729.

To avoid spurious correlations (especially with the Balmer decrement, which includes Hα), another tracer of the SFR was used in this work: the SFR derived from fitting SEDs to the combined SDSS and WISE photometry from Chang et al. (2015). They fit the combined photometry using the SED-fitting code MAGPHYS (da Cunha et al. 2012) to obtain monochromatic mid-IR SFR tracers, which do not suffer the same spurious correlations present in the SFR traced by the Hα flux.
"Physics"
] |
Unlabeled Data for Morphological Generation With Character-Based Sequence-to-Sequence Models
We present a semi-supervised way of training a character-based encoder-decoder recurrent neural network for morphological reinflection—the task of generating one inflected wordform from another. This is achieved by using unlabeled tokens or random strings as training data for an autoencoding task, adapting a network for morphological reinflection, and performing multi-task training. We thus use limited labeled data more effectively, obtaining up to 9.92% improvement over state-of-the-art baselines for 8 different languages.
Introduction
Morphologically rich languages use inflection, the adaptation of a surface form to its syntactic context, to mark the properties of a word, e.g., gender or number of nouns or tense of verbs. This drastically increases the type-token ratio, and thus negatively affects natural language processing (NLP), making morphological analysis and generation an important field of research.
In this work, we focus on morphological reinflection (MRI), the task of mapping one inflected form of a lemma to another, given the morphological properties of the target, e.g., (smiling, Past-Part) → smiled. The lemma does not have to be known. Recently, there have been some advances on the topic, motivated by the SIGMORPHON 2016 shared task on morphological reinflection (Cotterell et al., 2016) and the CoNLL-SIGMORPHON 2017 shared task on universal morphological reinflection (Cotterell et al., 2017). In 2016, neural sequence-to-sequence models, specifically attention-based encoder-decoder models, outperformed all other approaches by a wide margin (Faruqui et al., 2016; Kann and Schütze, 2016). However, those models require a lot of training data, while in contrast many morphologically rich languages are low-resource, and little work has been done so far on neural models for morphology in settings with limited training data. This makes sequence-to-sequence models not applicable to morphological generation in most languages.
An abundance of unlabeled data, in contrast, can be assumed available for each language in the focus of NLP. Thus, we propose a semi-supervised training method for a state-of-the-art encoder-decoder network for MRI using both labeled and unlabeled data, mitigating the need for time-expensive annotations. We achieve this by treating unlabeled words as training examples for an autoencoding (Vincent et al., 2010) task and multi-task training (cf. Figure 1). We intuit the following reasons why this should be beneficial: (i) The decoder's character language model can be trained using unlabeled data. (ii) Training on a second task reduces the problem of overfitting. (iii) By forcing the model to additionally learn autoencoding, we give it a strong prior to copy the input string. This might be advantageous as often many forms of a paradigm share the same stem, e.g., smiling and smiled. In order to investigate the importance of the latter, we further experiment with autoencoding of random strings and find that for our experimental settings and non-templatic languages the performance gain is comparable to using corpus words.
Model Description
The log-likelihood for joint training on the tasks of MRI and autoencoding is

$$\mathcal{L}(\theta) = \sum_{(f_s,\,t,\,f_t)\in T} \log p_\theta\big(f_t \mid e(f_s, t)\big) \;+\; \sum_{w\in W} \log p_\theta\big(w \mid e(w)\big) \qquad (1)$$

T is the MRI training data, with each example consisting of a source form $f_s$, a target form $f_t$ and a target tag t. W denotes a set of words in the language of the system. The encoding function e depends on θ. The parameters θ are shared across the two tasks, resulting in a sharing of information. We obtain this by giving our model data from both sets at the same time, and marking each example with a task-specific input symbol, cf. Figure 1.
Encoder. For the input of the encoder, we adapt the format by Kann and Schütze (2016), but modify it to be able to handle unlabeled data: Given the set of morphological subtags M that each target tag is composed of (e.g., the tag 1SgPresInd contains the subtags 1, Sg, Pres and Ind), and the alphabet Σ of the language of application, our input is of the form $B\,[A/M^*]\,\Sigma^*\,E$, i.e., it consists of either a sequence of subtags or the symbol A signaling that the input is not annotated and should be autoencoded, and (in both cases) the character sequence of the input word. B and E are start and end symbols. Each part of the input is represented by an embedding. We then encode the input $x = x_1, x_2, \ldots, x_{T_x}$ using a bidirectional gated recurrent neural network (GRU) (Cho et al., 2014b), whose forward hidden states follow $\vec{h}_i = f(\vec{h}_{i-1}, x_i)$, with f being the update function of the hidden layer. Forward and backward hidden states are concatenated to obtain the input $h_i$ for the decoder.
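To make the input format concrete, here is a minimal sketch (not the authors' code; the special-symbol names are illustrative):

```python
# Build encoder input sequences of the form B [A / M*] Σ* E:
# labeled MRI examples are prefixed with the target subtags,
# unlabeled examples with the autoencoding symbol A.
BOS, EOS, AUTO = "<B>", "<E>", "<A>"

def encoder_input(word, subtags=None):
    """subtags: list of morphological subtags (e.g. ["1", "Sg", "Pres", "Ind"])
    for MRI examples, or None for unlabeled/autoencoding examples."""
    prefix = subtags if subtags is not None else [AUTO]
    return [BOS] + prefix + list(word) + [EOS]

# MRI example: map "smiling" to the form tagged Past-Part.
print(encoder_input("smiling", ["Past", "Part"]))
# Autoencoding example for an unlabeled corpus word.
print(encoder_input("smiled"))
```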
Decoder. The decoder is an attention-based GRU, defining a probability distribution over strings in Σ*:

$$p(y \mid x) = \prod_t p(y_t \mid y_1, \ldots, y_{t-1}, x) = \prod_t g(y_{t-1}, s_t, c_t)$$

with $s_t$ being the decoder hidden state for time t and $c_t$ being a context vector, calculated using the encoder hidden states together with attention weights. A detailed description of the model can be found in Bahdanau et al. (2015).
Experiments
Dataset. We experiment on the task 3 dataset of the SIGMORPHON 2016 shared task on MRI (Cotterell et al., 2016) and all standard languages provided: Arabic, Finnish, Georgian, German, Navajo, Russian, Spanish and Turkish. German, Spanish and Russian are suffixing and exhibit stem changes. Russian differs from the other two in that those stem changes are consonantal and not vocalic. Finnish and Turkish are agglutinating, almost exclusively suffixing and have vowel harmony systems. Georgian uses both prefixation and suffixation. In contrast, Navajo mainly makes use of prefixes with consonant harmony among its sibilants. Finally, Arabic is a templatic, non-concatenative language.
For each language, we further add randomly sampled words from the respective Wikipedia dumps. We exclude tokens that are not exclusively composed of characters from the language's alphabet, e.g., digits, or that do not appear at least 2 times in the corpus. The exact amount of unlabeled data added is treated as a hyperparameter depending on the number of available annotated examples and optimized on the development set, cf. Section 4.1. Evaluation is done on the official shared task test set.
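A rough sketch of this filtering step (function and variable names are ours, not the authors'):

```python
import collections
import random

def sample_unlabeled(tokens, alphabet, n, min_count=2, seed=0):
    """Sample words usable for autoencoding: only characters from the
    language's alphabet, appearing at least min_count times in the corpus."""
    counts = collections.Counter(tokens)
    pool = [w for w, c in counts.items()
            if c >= min_count and all(ch in alphabet for ch in w)]
    random.Random(seed).shuffle(pool)
    return pool[:n]

tokens = "der hund läuft und der hund bellt und läuft".split()
print(sample_unlabeled(tokens, set("abcdefghijklmnopqrstuvwxyzäöüß"), 3))
```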
Training, hyperparameters and evaluation. We mainly adopt the hyperparameters of Kann and Schütze (2016). Embeddings are 300-dimensional, the size of all hidden layers is 100, and for training we use ADADELTA (Zeiler, 2012) with a batch size of 20. We train all models which use 1/8 or more of the labeled data for 200 epochs, and models that see 1/16 and 1/32 of the original data for 400 and 800 epochs, respectively. In all cases, we apply the last model for testing.
We evaluate using two metrics: accuracy and edit distance. Accuracy reports the percentage of completely correct solutions, while the edit distance between the system's guess and the gold solution gives credit to systems that produce forms that are close to the right form.
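For concreteness, a minimal Python sketch of these two metrics (the toy strings are invented for illustration):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (rolling array)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def evaluate(guesses, gold):
    acc = sum(g == t for g, t in zip(guesses, gold)) / len(gold)
    dist = sum(edit_distance(g, t) for g, t in zip(guesses, gold)) / len(gold)
    return acc, dist

print(evaluate(["smiled", "smilt"], ["smiled", "smiled"]))  # (0.5, 1.0)
```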
Baselines. We compare our system to three baselines. The first one is MED, the winning system of the 2016 shared task. The network architecture is the same as in our system, but it is trained exclusively on labeled data; thus, we expect it to suffer more strongly from a lack of resources. The second baseline is the official SIGMORPHON 2016 shared task baseline (SIG16) (Cotterell et al., 2016), which is similar in spirit to the system described by Nicolai et al. (2015). The system treats the prediction of edit operations to be performed on the input string as a sequential decision-making problem, greedily choosing each edit action given the previously chosen actions. The selection of operations is made by an averaged perceptron, using the binary features described in Cotterell et al. (2016). Third, we compare to the baseline system of the CoNLL-SIGMORPHON 2017 shared task on universal morphological reinflection (SIG17) (Cotterell et al., 2017), which is particularly suitable for low-resource settings. It splits all source and target forms in the training set into prefix, middle part and suffix, and uses those to find prefix or suffix substitution rules. Every evaluation example is searched for the longest contained prefix or suffix, and the rule belonging to that affix and the given target tag is applied to obtain the output. (Note that our use of the SIG16 system differs from the official baseline in that we perform a direct form-to-form mapping. The shared task system predicts first form-to-lemma and then lemma-to-form; however, we assume no lemmata to be given, and are thus unable to train such a system.)

Table 1: Accuracy (the higher the better) and edit distance (the lower the better) for our system and the three baselines on the official test set of task 3 of the SIGMORPHON 2016 shared task. Only the indicated amount (row labels) of the original training data is used, emulating a low-resource setting. Best results for each language in bold.

As shown in Table 1, additionally training on unlabeled examples improves the performance of the encoder-decoder network for nearly all settings and languages, especially for the very low-resource scenarios with 1/16 and 1/32 of the training data. The biggest increases in accuracy can be seen for Russian and Spanish, both in the 1/32 setting, with 0.0963 (0.5023 − 0.4060) and 0.0992 (0.7564 − 0.6572), respectively. For the settings with bigger amounts of training data available, the unlabeled data does not change performance much. This was expected, as the model already gets enough information from the annotated data. However, semi-supervised training never hurts performance, and can thus always be employed. Overall, our semi-supervised training method proves to be a useful extension of the original system. Furthermore, there is only one case (Georgian, 1/16) where any of the SIGMORPHON baselines outperforms the neural methods. This clearly shows the superiority of neural networks for the task and emphasizes the need to reduce the amount of labeled training data required for their training.
Amount of Unlabeled Data
We now consider the amount of unlabeled examples as a function of the number of annotated examples. Data and training regime are the same as in Section 3. This analysis is performed on the development set and we report the highest accuracy obtained during training. The resulting accuracies for Arabic and German can be seen in Figure 2. The other languages behave similarly to German. The loss of performance for reducing the training data varies a lot between languages, depending on how regular and thus "easy to learn" they are. Concerning the amount of unlabeled examples, it seems that even though in single cases other ratios are slightly better, using 4 times more unlabeled examples mostly obtains the highest accuracy. Thus, a general rule could be: the more additional examples are used, the better. The only exception is Arabic in the 1/32 setting, where using half as many unlabeled as labeled examples obtains much better results. We explain this with the Semitic language being templatic. Since words in Arabic paradigms do not share a connected stem, we expect that giving the model too much bias to copy might harm performance in low-resource settings. However, even for low-resource Arabic, using a ratio of 1:4 of labeled to unlabeled examples still yields better performance than not using unlabeled examples at all. Thus, we can conclude that if aiming for a language-independent setup, this is a good ratio.
Autoencoding of Random Strings
We expect the network to benefit from a bias to copy strings. This suggests that any random combination of characters from the language's alphabet could be autoencoded in order to improve the performance in low-resource settings. To verify this, we train models on new datasets with 1/32 of the labeled examples from task 3 of the SIGMORPHON 2016 shared task and the optimal number of unlabeled examples for each language, cf. §4.1. However, the unlabeled examples are now random strings of a length between 3 and 20. All models are trained as before. Accuracies on the official test sets are shown in Table 2, compared to (i) training without unlabeled examples and (ii) the data being enhanced by corpus words. Several aspects of the results are eye-catching. First, for Arabic, the gap to the performance with corpus words is the biggest, showing that indeed the tendency of languages to copy the stem when inflecting plays an important role. Second, for some languages the performance gains for corpus words and random words are comparable. Third, the performance of random strings is closer to the performance of corpus words the higher the overall accuracy is. The additional unlabeled examples might be acting as regularizers in this case.

Table 2: Accuracies for MED (Kann and Schütze, 2016), MED+corpus and MED+random. Descriptions in the text.
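A minimal sketch of how such random autoencoding "words" could be generated (the function name and seed are illustrative, not the authors' code):

```python
import random

def random_strings(alphabet, n, min_len=3, max_len=20, seed=0):
    """Generate autoencoding 'words' as uniform random character strings."""
    rng = random.Random(seed)
    return ["".join(rng.choice(alphabet)
                    for _ in range(rng.randint(min_len, max_len)))
            for _ in range(n)]

print(random_strings("abcdefghijklmnopqrstuvwxyz", 3))
```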
Overall, this experiment shows clearly that giving the model a bias to copy strings helps for inflection in non-templatic languages, and that random strings can improve a network for MRI.
Related Work
For the SIGMORPHON 2016 and the CoNLL-SIGMORPHON 2017 shared tasks (Cotterell et al., 2016, 2017), multiple MRI systems were developed, e.g., (Nicolai et al., 2016; Taji et al., 2016; Kann and Schütze, 2016; Aharoni et al., 2016; Östling, 2016; Makarov et al., 2017). Encoder-decoder neural networks (Cho et al., 2014a; Sutskever et al., 2014; Bahdanau et al., 2015) performed best, such that we extend them in this work. Earlier work on paradigm completion includes (Faruqui et al., 2016; Nicolai et al., 2015; Durrett and DeNero, 2013). Work directly tackling MRI was rarer, e.g., (Dreyer and Eisner, 2009). Our work relates to the line of research on minimally supervised and unsupervised methods for morphology, e.g., Creutz and Lagus (2007) and Goldsmith (2001) presenting the unsupervised morphological segmentation systems Morfessor and Linguistica, or (Dreyer and Eisner, 2011; Poon et al., 2009; Snyder and Barzilay, 2008). However, none of those focused directly on MRI or on training neural networks for morphology. The only case we know of where this was done was work by Kann et al. (2017). They leveraged morphologically annotated data in a closely related high-resource language to reduce the need for labeled data in the target language. This works well for similar languages, but has the shortcoming of requiring annotations in such a language to be at hand. A similar approach was presented by Ha et al. (2016) for machine translation (MT).
Unlabeled corpora were used for semi-supervised training of models for MT, e.g., by Cheng et al. (2016); Vincent et al. (2010); Socher et al. (2011); Ramachandran et al. (2016). Those approaches differ from ours, due to a fundamental difference between the two tasks: For MRI, the source vocabulary and the target vocabulary are mostly the same. This makes it intuitive for MRI to train the final model jointly on MRI and autoencoding.
Conclusion
We presented a way of semi-supervised training of a state-of-the-art model for low-resource MRI, using words from an unlabeled corpus. We found that the best ratio of labeled to unlabeled data depends on the morphological typology of the language. Finally, we showed that autoencoding random strings also increases performance, for some languages as much as using corpus words.
Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset
Hateful memes pose a unique challenge for current machine learning systems because their message is derived from both text- and visual-modalities. To this effect, Facebook released the Hateful Memes Challenge, a dataset of memes with pre-extracted text captions, but it is unclear whether these synthetic examples generalize to ‘memes in the wild’. In this paper, we collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset. We find that ‘memes in the wild’ differ in two key aspects: 1) Captions must be extracted via OCR, injecting noise and diminishing performance of multimodal models, and 2) Memes are more diverse than ‘traditional memes’, including screenshots of conversations or text on a plain background. This paper thus serves as a reality-check for the current benchmark of hateful meme detection and its applicability for detecting real world hate.
Introduction
Hate speech is becoming increasingly difficult to monitor due to an increase in volume and diversification of type (MacAvaney et al., 2019). To facilitate the development of multimodal hate detection algorithms, Facebook introduced the Hateful Memes Challenge, a dataset synthetically constructed by pairing text and images (Kiela et al., 2020). Crucially, a meme's hatefulness is determined by the combined meaning of image and text. The question of likeness between synthetically created content and naturally occurring memes is both an ethical and technical one: Any features of this benchmark dataset which are not representative of reality will result in models potentially overfitting to 'clean' memes and generalizing poorly to memes in the wild. Thus, we ask the question: How well do Facebook's synthetic examples (FB) represent memes found in the real world? We use Pinterest memes (Pin) as our example of memes in the wild and explore differences across three aspects: 1. OCR. While FB memes have their text pre-extracted, memes in the wild do not. Therefore, we test the performance of several Optical Character Recognition (OCR) algorithms on Pin and FB memes.
2. Text content. To compare text modality content, we examine the most frequent n-grams and train a classifier to predict a meme's dataset membership based on its text.
3. Image content and style. To compare image modality, we evaluate meme types (traditional memes, text, screenshots) and attributes contained within memes (number of faces and estimated demographic characteristics).
After characterizing these differences, we evaluate a number of unimodal and multimodal hate classifiers pre-trained on FB memes to assess how well they generalize to memes in the wild.
Background
The majority of hate speech research focuses on text, mostly from Twitter (Waseem and Hovy, 2016; Davidson et al., 2017; Founta et al., 2018; Zampieri et al., 2019). Text-based studies face challenges such as distinguishing hate speech from offensive speech (Davidson et al., 2017) and counter speech (Mathew et al., 2018), as well as avoiding racial bias (Sap et al., 2019). Some studies focus on multimodal forms of hate, such as sexist advertisements (Gasparini et al., 2018), YouTube videos (Poria et al., 2016), and memes (Suryawanshi et al., 2020; Zhou and Chen, 2020; Das et al., 2020). While the Hateful Memes Challenge (Kiela et al., 2020) encouraged innovative research on multimodal hate, many of the solutions may not generalize to detecting hateful memes at large. For example, the winning team Zhong (2020) exploits a simple statistical bias resulting from the dataset generation process. While the original dataset has since been re-annotated with fine-grained labels regarding the target and type of hate (Nie et al., 2021), this paper focuses on the binary distinction of hate and non-hate.
Pinterest Data Collection Process
Pinterest is a social media site which groups images into collections based on similar themes. The search function returns images based on user-defined descriptions and tags. Therefore, we collect memes from Pinterest 1 using keyword search terms as noisy labels for whether the returned images are likely hateful or non-hateful (see Appendix A). For hate, we sample based on two heuristics: synonyms of hatefulness or specific hate directed towards protected groups (e.g., 'offensive memes', 'sexist memes') and slurs associated with these types of hate (e.g., 'sl*t memes', 'wh*re memes'). For non-hate, we again draw on two heuristics: positive sentiment words (e.g., 'funny', 'wholesome', 'cute') and memes relating to entities excluded from the definition of hate speech because they are not a protected category (e.g., 'food', 'maths'). Memes are collected between March 13 and April 1, 2021. We drop duplicate memes, leaving 2,840 images, of which 37% belong to the hateful category.
Extracting Text- and Image-Modalities (OCR)
We evaluate the following OCR algorithms on the Pin and FB datasets: Tesseract (Smith, 2007), EasyOCR (Jaided AI) and East (Zhou et al., 2017). Previous research has shown the importance of prefiltering images before applying OCR algorithms (Bieniecki et al., 2007). Therefore, we consider two prefiltering methods fine-tuned to the specific characteristics of each dataset (see Appendix B).
Unimodal Text Differences
After OCR text extraction, we retain words with a probability of correct identification ≥ 0.5, and remove stopwords. A text-based classification task using a unigram Naïve-Bayes model is employed to discriminate between hateful and non-hateful memes of both Pin and FB datasets.
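A minimal sketch of such a unigram Naïve-Bayes classifier with scikit-learn (the toy captions and labels are invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy captions with binary labels (1 = hateful); real inputs are OCR outputs.
texts = ["wholesome dog meme", "funny cat", "slur group insult", "hateful slur"]
labels = [0, 0, 1, 1]

clf = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["funny wholesome meme", "slur insult"]))  # -> [0 1]
```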
Unimodal Image Differences
To investigate the distribution of types of memes, we train a linear classifier on image features from the penultimate layer of CLIP (see Appendix C) (Radford et al., 2021). From the 100 manually examined Pin memes, we find three broad categories: 1) traditional memes; 2) memes consisting of just text; and 3) screenshots. Examples of each are shown in Appendix C. Further, to detect (potentially several) human faces contained within memes and their relationship with hatefulness, we use a pre-trained FaceNet model (Schroff et al., 2015) to locate faces and apply a pre-trained DEX model (Rothe et al., 2015) to estimate their ages, genders, and races. We compare the distributions of these features between the hateful/non-hateful samples. We note that these models are controversial and may suffer from algorithmic bias due to differential accuracy rates for detecting various subgroups. Alvi et al. (2018) show DEX contains erroneous age information, and Terhorst et al. (2021) show that FaceNet has lower recognition rates for female faces compared to male faces. These are larger issues discussed within the computer vision community (Buolamwini and Gebru, 2018).
Comparison Across Baseline Models
To examine the consequences of differences between the FB and Pin datasets, we conduct a preliminary classification of memes into hate and non-hate using benchmark models. First, we take a subsample of the Pin dataset to match Facebook's dev dataset, which contains 540 memes, of which 37% are hateful. We compare performance across three samples: (1) FB memes with 'ground truth' text and labels; (2) FB memes with Tesseract OCR text and ground truth labels; and (3) Pin memes with Tesseract OCR text and noisy labels. Next, we select several baseline models pretrained on FB memes 2 , provided in the original Hateful Memes challenge (Kiela et al., 2020). Of the 11 pretrained baseline models, we evaluate the performance of five that do not require further preprocessing: Concat Bert, Late Fusion, MMBT-Grid, Unimodal Image, and Unimodal Text. We note that these models are not fine-tuned on Pin memes but simply evaluate their transfer performance. Finally, we make zero-shot predictions using CLIP (Radford et al., 2021), and evaluate a linear model of visual features trained on the FB dataset (see Appendix D).
OCR Performance
Each of the three OCR engines is paired with one of the two prefiltering methods tuned specifically to each dataset, forming a total of six pairs for evaluation. For both datasets, the methods are tested on 100 random images with manually annotated text. For each method, we compute the average cosine similarity of the joint TF-IDF vectors between the labelled and cleaned 3 predicted text, shown in Tab. 1. Tesseract with FB tuning performs best on the FB dataset, while Easy with Pin tuning performs best on the Pin dataset. We evaluate transferability by comparing how a given pair performs on both datasets. OCR transferability is generally low, but greater from the FB dataset to the Pin dataset, despite the latter being more general than the former. This may be explained by the fact that the dominant form of Pin memes (i.e. text on a uniform background outside of the image) is not present in the FB dataset, so any method specifically optimized for Pin memes would perform poorly on FB memes.
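A sketch of this similarity metric with scikit-learn (the example caption pair is invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def ocr_score(predicted, annotated):
    """Average cosine similarity of joint TF-IDF vectors between
    predicted and hand-annotated captions."""
    vec = TfidfVectorizer().fit(predicted + annotated)  # joint vocabulary
    P, A = vec.transform(predicted), vec.transform(annotated)
    return sum(cosine_similarity(P[i], A[i])[0, 0]
               for i in range(len(predicted))) / len(predicted)

print(ocr_score(["funny cat meme"], ["funny cat mime"]))
```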
Unimodal Text Differences
We compare unigrams and bigrams across datasets after removing stop words, numbers, and URLs. The bigrams are topically different (refer to Appendix E). A unigram token-based Naïve-Bayes classifier is trained on both datasets separately to distinguish between hateful and non-hateful classes. The model achieves an accuracy score of 60.7% on FB memes and 68.2% on Pin memes (random guessing is 50%), indicating mildly different text distributions between hate and non-hate. In order to understand the differences between the type of language used in the two datasets, a classifier is trained to discriminate between FB and Pin memes (regardless of whether they are hateful) based on the extracted tokens. The accuracy is 77.4% on a balanced test set. The high classification performance might be explained by the OCR-generated junk text in the Pin memes which can be observed in a t-SNE plot (see Appendix F).
Unimodal Image Differences
While the FB dataset contains only "traditional memes" 4 , we find this definition of 'a meme' to be too narrow: the Pin memes are more diverse, containing 15% memes with only text and 7% memes which are screenshots (see Tab. 2). Tab. 3 shows the facial recognition results. We find that Pin memes contain fewer faces than FB memes, while other demographic factors broadly match. The DEX model identifies similar age distributions by hate and non-hate and by dataset, with an average of 30, and a gender distribution heavily skewed towards male faces (see Appendix G for additional demographics).

Surprisingly, we find that the CLIP Linear Probe generalizes very well, performing best for all three samples, with superior performance on Pin memes as compared to FB memes. Because CLIP has been pre-trained on around 400M image-text pairs from the Internet, its learned features generalize better to the Pin dataset, even though it was fine-tuned on the FB dataset. Of the multimodal models, Late Fusion performs the best on all three samples. When comparing the performance of Late Fusion on the FB and Pin OCR samples, we find a significant drop in model performance of 12 percentage points. The unimodal text model performs significantly better on FB with the ground truth annotations as compared to either sample with OCR-extracted text. This may be explained by the 'clean' captions, which do not generalize to real-world meme instances without pre-extracted text.
Discussion
The key difference in text modalities derives from the efficacy of the OCR extraction, where messier captions result in performance losses in Text BERT classification. This forms a critique of the way in which the Hateful Memes Challenge is constructed, in which researchers are incentivized to rely on the pre-extracted text rather than using OCR; thus, the reported performance overestimates success in the real world. Further, the Challenge defines a meme as 'a traditional meme' but we question whether this definition is too narrow to encompass the diversity of real memes found in the wild, such as screenshots of text conversations.
When comparing the performance of unimodal and multimodal models, we find multimodal models have superior classification capabilities, which may be because the combination of multiple modes creates meaning beyond the text and image alone (Kruk et al., 2019). For all three multimodal models (Concat BERT, Late Fusion, and MMBT-Grid), the score for FB memes with ground truth captions is higher than that of FB memes with OCR-extracted text, which in turn is higher than that of Pin memes. Finally, we note that CLIP's performance, for zero-shot and linear probing, surpasses the other models and is stable across both datasets.
Limitations

Despite presenting a preliminary investigation of the generalizability of the FB dataset to memes in the wild, this paper has several limitations. Firstly, the errors introduced by OCR text extraction resulted in 'messy' captions for Pin memes. This may explain why Pin memes could be distinguished from FB memes by a Naïve-Bayes classifier using text alone. However, these errors demonstrate our key conclusion that the pre-extracted captions of FB memes are not representative of the appropriate pipelines which are required for real-world hateful meme detection.
Secondly, our Pin dataset relies on noisy labels of hate/non-hate based on keyword searches, but this chosen heuristic may not catch subtler forms of hate. Further, user-defined labels introduce normative value judgements of whether something is 'offensive' versus 'funny', and such judgements may differ from how Facebook's community standards define hate (Facebook, 2021). In future work, we aim to annotate the Pin dataset with multiple manual annotators for greater comparability to the FB dataset. These ground-truth annotations will allow us to pre-train models on Pin memes and also assess transferability to FB memes.
Conclusion
We conduct a reality check of the Hateful Memes Challenge. Our results indicate that there are differences between the synthetic Facebook memes and 'in-the-wild' Pinterest memes, both with regards to text and image modalities. Training and testing unimodal text models on Facebook's pre-extracted captions discounts the potential errors introduced by OCR extraction, which is required for real world hateful meme detection. We hope to repeat this work once we have annotations for the Pinterest dataset and to expand the analysis from comparing between the binary categories of hate versus non-hate to include a comparison across different types and targets of hate.
A Details on Pinterest Data Collection
Tab. 5 shows the keywords we use to search for memes on Pinterest. The search function returns images based on user-defined tags and descriptions aligning with the search term (Pinterest, 2021). Each keyword search returns several hundred images on the first few pages of results. Note that Pinterest bans searches for 'racist' memes or slurs associated with racial hatred, so these could not be collected. We prefer this method of 'noisy' labelling over classifying the memes with existing hate speech classifiers with the text as input, because users likely take the multimodal content of the meme into account when adding tags or writing descriptions. However, we recognize that user-defined labelling comes with its own limitation of introducing noise into the dataset from idiosyncratic interpretation of tags. We also recognize that the memes we collect from Pinterest do not represent all Pinterest memes, nor do they represent all memes generally on the Internet; rather, they reflect a sample of instances. Further, we over-sample non-hateful memes as compared to hateful memes because this distribution is one that is reflected in the real world. For example, the FB dev set is composed of 37% hateful memes. Lastly, while we manually confirm the noisy labels of 50 hateful and 50 non-hateful memes (see Tab. 6), we also recognize that not all of the images accurately match the associated noisy label, especially for hateful memes, which must match the definition of hate speech as directed towards a protected category.

Table 5: Keywords used to produce noisily-labelled samples of hateful and non-hateful memes from Pinterest.
Noisy Label | Keywords
Hate | "sexist", "offensive", "vulgar", "wh*re", "sl*t", "prostitute"
Non-Hate | "funny", "wholesome", "happy", "friendship", "cute", "phd", "student", "food", "exercise"

East (Zhou et al., 2017) is an efficient deep learning algorithm for text detection in natural scenes. In this paper East is used to isolate regions of interest in the image, in combination with Tesseract for text recognition. Figure 4 shows the dominant text patterns in the FB (a) and Pin (b) datasets, respectively. We use a specific prefiltering adapted to each pattern, as follows.
B.2 OCR Pre-filtering
FB Tuning: FB memes always have a black-edged white Impact font. The most efficient prefiltering sequence consists of applying an RGB-to-Gray conversion, followed by binary thresholding, closing, and inversion. Pin Tuning: Pin memes are less structured than FB memes, but a commonly observed meme type is text placed outside of the image on a uniform background. For this pattern, the most efficient prefiltering sequence consists of an RGB-to-Gray conversion followed by Otsu's thresholding.
The optimal thresholds used to classify pixels in binary and Otsu's thresholding operations are found so as to maximise the average cosine similarity of the joint TF-IDF vectors between the labelled and predicted text from a sample of 30 annotated images from both datasets.
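A rough sketch of the two prefiltering pipelines using OpenCV and Tesseract; the threshold value and kernel size below are illustrative stand-ins for the optimized values described above:

```python
import cv2
import pytesseract

def ocr_fb_style(path, thresh=180):
    """FB tuning: gray -> binary threshold -> morphological closing -> inversion."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    return pytesseract.image_to_string(cv2.bitwise_not(closed))

def ocr_pin_style(path):
    """Pin tuning: gray -> Otsu's thresholding (threshold chosen automatically)."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)
```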
C.1 Data Preparation
To prepare the data needed for training the ternary classifier (i.e., traditional memes, memes purely consisting of text, and screenshots), we annotate the Pin dataset with manual annotations to create a balanced set of 400 images. We split the set randomly, so that 70% is used as training data and the remaining 30% as validation data. Figure 2 shows the main types of memes encountered. The FB dataset only has traditional meme types.
C.2 Training Process
We use image features taken from the penultimate layer of CLIP. We train a neural network with two hidden layers of 64 and 12 neurons respectively with ReLU activations, using Adam optimizer, for 50 epochs. The model achieves 93.3% accuracy on the validation set.
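A minimal PyTorch sketch of this classification head, assuming 512-dimensional CLIP ViT-B/32 image features; the random tensors stand in for the actual features and annotations:

```python
import torch
import torch.nn as nn

# Ternary head (traditional meme / text-only / screenshot) on top of
# frozen CLIP image features.
model = nn.Sequential(
    nn.Linear(512, 64), nn.ReLU(),
    nn.Linear(64, 12), nn.ReLU(),
    nn.Linear(12, 3),
)
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(280, 512)        # stand-in for CLIP features (70% split)
targets = torch.randint(0, 3, (280,))   # stand-in for manual annotations

for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    opt.step()
```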
D Classification Using CLIP

D.1 Zero-shot Classification
To perform zero-shot classification using CLIP (Radford et al., 2021), for every meme we use two prompts, "a meme" and "a hatespeech meme". We measure the similarity score between the image and text embeddings and use the corresponding text prompt as a label. Note we regard this method as neither multimodal nor unimodal, as the text is not explicitly given to the model, but as shown by Radford et al. (2021), CLIP has some OCR capabilities. In future work we would like to explore how to modify the text prompts to improve performance.
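A sketch of this zero-shot procedure with the openai CLIP package (the image path is a placeholder):

```python
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32")
prompts = clip.tokenize(["a meme", "a hatespeech meme"])

image = preprocess(Image.open("meme.png")).unsqueeze(0)
with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(prompts)
    img_emb /= img_emb.norm(dim=-1, keepdim=True)
    txt_emb /= txt_emb.norm(dim=-1, keepdim=True)
    sims = (img_emb @ txt_emb.T).squeeze(0)  # cosine similarity to each prompt

print(["non-hateful", "hateful"][sims.argmax().item()])
```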
D.2 Linear Probing
We train a binary linear classifier on the image features of CLIP on the FB train set. We train the classifier following the procedure outlined by Radford et al. (2021). Finally, we evaluate the binary classifier on the FB dev set and the Pin dataset.
In all experiments above we use the pretrained ViT-B/32 model.
F T-SNE Text Embeddings
The meme-level embeddings are calculated by (i) extracting a 300-dimensional embedding for each word in the meme, using fastText embeddings trained on Wikipedia and Common Crawl; and (ii) averaging all the embeddings along each dimension. A t-SNE transformation is then applied to the full dataset, reducing it to two-dimensional space. After this reduction, 1000 text embeddings from each category (FB and Pin) are extracted and visualized. The default perplexity parameter of 50 is used. Fig. 3 presents the t-SNE plot (Van der Maaten and Hinton, 2008), which indicates a concentration of multiple embeddings of the Pin memes within a region at the bottom of the figure. These memes represent those that have nonsensical word tokens from OCR errors.

To evaluate memes with multiple faces, we develop a self-adaptive algorithm to separate faces. For each meme, we enumerate the position of a cutting line (either horizontal or vertical) with fixed granularity, and run facial detection models on both parts separately. If both parts have a high probability of containing faces, we decide that each part has at least one face. Hence, we cut the meme along the line and run this algorithm iteratively on both parts. If no enumerated cutting line satisfies the condition above, then we decide there is only one face in the meme and terminate the algorithm.
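A sketch of this recursive cutting procedure in Python; face_prob, the granularity and the probability threshold are illustrative stand-ins for the pre-trained detector and tuned parameters:

```python
import numpy as np

def split_faces(img, face_prob, min_size=40, threshold=0.9, step=20):
    """Recursively cut a meme along horizontal/vertical lines when both
    halves are likely to contain a face; face_prob is a stand-in for a
    pre-trained facial detector returning P(region contains a face)."""
    h, w = img.shape[:2]
    for axis, size in ((0, h), (1, w)):          # horizontal, then vertical cuts
        for cut in range(min_size, size - min_size, step):
            a = img[:cut] if axis == 0 else img[:, :cut]
            b = img[cut:] if axis == 0 else img[:, cut:]
            if face_prob(a) > threshold and face_prob(b) > threshold:
                return split_faces(a, face_prob) + split_faces(b, face_prob)
    return [img]                                 # no valid cut: one face

# Stub detector that never finds a face, so the meme is kept whole.
faces = split_faces(np.zeros((200, 300, 3)), face_prob=lambda region: 0.0)
print(len(faces))  # 1
```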
The dynamical balance, transport and circulation of the Antarctic Circumpolar Current
The physical elements of the circulation of the Antarctic Circumpolar Current (ACC) are reviewed. A picture of the circulation is sketched by means of recent observations from the WOCE decade. We present and discuss the role of forcing functions (wind stress, surface buoyancy flux) in the dynamical balance of the flow and in the meridional circulation and study their relation to the ACC transport. The physics of form stress at tilted isopycnals and at the ocean bottom are elucidated as central mechanisms in the momentum balance. We explain the failure of the Sverdrup balance in the ACC circulation and highlight the role of geostrophic contours in the balance of vorticity. Emphasis is on the interrelation of the zonal momentum balance and the meridional circulation, the importance of diapycnal mixing and eddy processes. Finally, new model concepts are described: a model of the ACC transport dependence on wind stress and buoyancy flux, based on linear wave theory; and a model of the meridional overturning and the mean density structure of the Southern Ocean, based on zonally averaged dynamics and thermodynamics with eddy parametrization.
While the circumpolar connection of the ocean basins is brought about by the deep reaching and strong zonal current, these same characteristics of the ACC act to limit meridional exchange and tend to isolate the ocean to the south from heat and substance sources in the rest of the world ocean.
The ACC system is sketched in Fig. 1 by its major fronts. These are traced by the regionally (and temporally, see Section 3) highly variable surface temperature gradient displayed in Fig. 2, which shows that the ACC is a fragmented system of more or less intense jet streams. The thermal fronts have a close correspondence in density and extend to depth, in most places to the bottom (see Fig. 3), but can also be correlated with surface elevations as detected in satellite altimetry data (e.g. Gille 1994). The ACC resides mainly in the two circumpolar fronts, the sub-Antarctic Front and the Polar Front, which, due to regional and temporal variability, appear as multiple branches in the hydrographic section of Fig. 3. From Fig. 3 it becomes apparent that watermass properties do penetrate across the ACC and, in fact, there is a prominent meridional circulation associated with the predominantly zonal ACC. It was described as early as 1933 by Sverdrup (see also Sverdrup et al. 1942) and has lately been interpreted as the Southern Ocean part of the 'global conveyor belt' circulation (Gordon 1986, Broecker 1991, Schmitz 1995). The 'sliced cake' view of the Southern Ocean watermasses and their propagation is shown in Fig. 4 (Gordon 1999).
Fig. 4.
A 'cake' with large slice removed view of the Southern Ocean. Isotherms of the annual average sea surface temperature (SST°C) are shown on the plane of the sea surface. The core of the eastward flowing ACC and associated polar front occurs near the 4°C isotherm. The right plane of the slice shows salinity (S). These data are derived from the oceanographic observations along the Greenwich Meridian shown on the floor of the figure (dots, from Fahrbach et al. 1994). Deep relatively saline water, > 34.7 (Circumpolar Deep Water CDW, arrow) spreads poleward and upwells towards the sea surface. It is balanced by a northward flow of lower salinity waters, < 34.4 near 1000 m (Antarctic Intermediate Water AAIW, arrow) and by sinking of slightly lower salinity water along the continental slope of Antarctica (arrow). This process (salty water in, fresher water out) removes the slight excess of regional precipitation from the Southern Ocean. Along the left plane is temperature (T°C) based on data collected at the same points as used for the salinity section. Shallowing of the isotherms is evident as the deep water rises up towards the sea surface. There it is cooled and sinks flooding the bottom layers with waters of less than 0°C. This cold bottom water spreads well into the global oceans (Antarctic Bottom Water AABW). Along the outer edge of the figure, latitude 35°S, is salinity (S). The low salinity water (AAIW) is shown as the less than 34.45 band near 1000 m. More saline deep water is seen spreading southward near the 4000 m depth. From Gordon (1999).
Fig. 5.
Left: section integrated (south to north) baroclinic transport relative to the deepest common level for SR1 (Drake Passage) and SR3 (Australia to Antarctica) from various hydrographic section along SR1 and SR3 between 1991 and 2001. The location and names for the fronts are according to Orsi et al. (1995). Right: transport in neutral density classes. For SR3 the data of the various surveys are shown as bins, for SR1 only the mean transport profile of the cruises is given (full line). From Cunningham et al. (2003) and Rintoul & Sokolov (2001).
The 'sliced cake' view (Fig. 4) traces the spreading of Circumpolar Deep Water (CDW) and AABW. The figure also depicts the circulation and nicely illustrates one of the central questions of Southern Ocean science: how do water properties (heat, salt, nutrients and other chemical substances) cross the strong, deep reaching ACC? We will address this question of mass and property balances in Section 5; it is closely connected to the question of the dynamical balance of the ACC (treated in Section 3): what forces drive the zonal current, which forces act as a brake, and what physical mechanisms are responsible for the deep reaching current profile? It should be borne in mind that the answers to these questions will not necessarily give insight into the problem of how the magnitude of the zonal transport of water by the ACC relates to the flux of (zonal) momentum through the system (just as the balanced transfer of money to and from a bank account does not determine the account balance). We discuss the dependence of transport on the forcing functions and present a new simple linear transport model (Section 4).
In this paper we review some of the concepts and theories which are currently discussed for the circulation of the Antarctic Circumpolar Current system. It expands and complements other recent reviews of the ACC system (e.g. Olbers 1998) and summaries of ACC dynamics contained in original research articles (e.g. Gnanadesikan & Hallberg 2000, Tansley & Marshall 2001). But we should point out that this review does not cover all current research on the ACC: e.g. we do not address regional properties of the ACC system and its temporal variability, and we do not report on teleconnections and links of the Southern Ocean with the global ocean circulation, or the possible dependence of the stratification and transport of the ACC on remote conditions and mechanisms.
The zonal transport
The meridional momentum balance of the ACC is basically geostrophic, i.e. the zonal current velocity (at each geopotential level) is related to the meridional pressure gradient, resulting from a dip of about 1.5 m (from north to south) of the sea surface across the current system, and from the gradient of density in the fronts, as can for example be inferred from temperature and salinity in the SR3 section in Fig. 3. The surface pressure gradient yields an overall eastward surface velocity and the mass stratification yields a positive shear $(u_g)_z = (g/f)\rho_y$ of the geostrophic part of the current; the velocity thus diminishes with depth, but generally not so strongly as to imply a reversal of the flow. The above 'thermal wind relation' is utilized to infer from hydrographic section data the 'baroclinic' transport (normal to the section and referred to a common depth) or the DCL transport (referred to the bottom depth or deepest common level (DCL) for station pairs) of the ACC. Various attempts have been made to determine the absolute or net transport by taking reference velocities from moorings or LADCP (Lowered Acoustic Doppler Current Profiler) measurements, or by levelling bottom pressure gauges (see the recent discussion of Cunningham et al. 2003). Prior to WOCE, most efforts were made in the International Southern Ocean Studies (ISOS) experiment at Drake Passage (Whitworth 1983, Whitworth & Peterson 1985). More recently, transport estimates have been made at Drake Passage (WOCE SR1) and at the section between Australia and Antarctica (WOCE SR3, see Fig. 3) at 140°E, where multiple surveys have been made during WOCE.
The average DCL transport of SR1 for six hydrographic sections across Drake Passage (see Fig. 5) is 136.7 ± 7.8 Sv, with about equal contributions from the Polar Front (57.5 ± 5.7 Sv) and the sub-Antarctic Front (53 ± 10 Sv). The analysis of ISOS and WOCE data (spanning 25 years) gave no indication of significant trends or unsteadiness. Following Rintoul & Sokolov (2001), the mean transport south of Australia at SR3 is 147 ± 10 Sv (relative to a 'best guess' reference level: at the bottom, except near the Antarctic margin, where a shallower level is used, consistent with westward flow over the continental slope and rise). It is about 13 Sv larger than the ISOS estimate of absolute transport through Drake Passage and about 10 Sv larger than the SR1 DCL transport (see Fig. 5). The transport south of Australia must be larger than that at Drake Passage to balance the Indonesian throughflow, which is believed to be of order 10 Sv. However, given the remaining uncertainty in the barotropic flow at both locations, the agreement is likely to be fortuitous. Variability of the transport at SR3 has been detected in a six-year record of repeat hydrographic sections (Rintoul et al. 2002); it is fairly small (1-3 Sv). Figure 5 also shows the contribution of transport in the main classes of watermasses. In both sections the CDW range carries most of the zonal transport and no systematic temporal change of the relative contribution could be detected.
Monitoring the transport through Drake Passage is by now a standard diagnostic of numerical global ocean models. The resulting values are spread over a large range, from well under 100 Sv to well over 200 Sv. The reasons for this diversity are not fully understood, but wind forcing and thermohaline processes (Cai & Baines 1996, Gent et al., Best et al. 1999), as well as the parametrization of subgrid-scale tracer fluxes (e.g. Danabasoglu & …, Fritzsch et al. 2000), are known to be important factors. Most eddy resolving models (or 'permitting', since the achieved resolution does not resolve all of the relevant eddy scales) yield transport values closer to, but mostly above, the observations (see Table I).
Integrating the thermal wind balance $(u_g)_z = (g/f)\rho_y$ twice vertically we get

$$U_g = \int_{-h}^{0} u_g\, dz = h\,u_g(-h) + \frac{g}{f}\int_{-h}^{0} dz \int_{-h}^{z} \rho_y\, dz' \qquad (1)$$

for the geostrophic transport (per unit length along the section). The geostrophic velocity at the bottom can be expressed by the gradient of pressure taken at the bottom, $u_g(z=-h) = -(p_y)_{-h}/f$, which can be combined with the second term to yield

$$f\,U_g = -h\,(p_b)_y - \chi_y \qquad (2)$$

In this relation the geostrophic transport is expressed by the gradients of the bottom pressure $p_b = p(z=-h)$ and the density moment $\chi = g\int_{-h}^{0} z\rho\, dz$, which is the total baroclinic potential energy (referred to z = 0). But $U_g$ is not the total transport, which is the integral of the absolute velocity from top to bottom. The total volume transport through a section also contains the Ekman transports (due to wind stress and frictional bottom stress) normal to the respective section, and other contributions induced by nonlinearities and lateral friction. In large-scale currents the latter two are usually small, and for north-south sections the Ekman contributions can be neglected (for predominantly zonal winds). But turning from the section coordinate y to the general case, we add to the geostrophic transport the Ekman parts and get the expression for the total transport U in vector form,

$$f\,\mathbf{k}\times\mathbf{U} = -h\,(\nabla p)_{-h} - g\int_{-h}^{0} z\,\nabla\rho\, dz + \boldsymbol{\tau}_s - \boldsymbol{\tau}_b \qquad (3)$$

The total transport conserves mass and is thus representable by a streamfunction ψ. With the approximations made so far (neglecting lateral stresses and nonlinearities) the above equation is the vertically integrated balance of momentum.

Fig. 6. There are two prominent regions where geostrophic contours are blocked by continents: the region between Australia and Antarctica, and Drake Passage between South America and Antarctica. Here, the ACC must cross geostrophic contours. Although the geostrophic contours are not blocked there, the ACC also crosses the geostrophic contours at the East Pacific Rise and at several other locations. From Borowski et al. (2002).
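As a rough numerical illustration of Eq. (1), the sketch below integrates the thermal wind shear upward from zero velocity at the bottom to estimate a DCL-style transport. The synthetic density front and all parameter values are invented stand-ins, with the reference density ρ0 written out explicitly:

```python
import numpy as np

g, f, rho0 = 9.81, -1.2e-4, 1027.0        # f for roughly 55°S

def dcl_transport(rho, y, z):
    """rho: density on a (depth, station) grid; y: station positions [m];
    z: depth levels [m, negative downward]. Returns transport in Sv."""
    rho_y = np.gradient(rho, y, axis=1)   # meridional density gradient
    shear = (g / (f * rho0)) * rho_y      # thermal wind shear du/dz
    dz = -np.gradient(z)                  # positive layer thickness
    # Integrate the shear upward from u = 0 at the bottom.
    u = np.cumsum((shear * dz[:, None])[::-1], axis=0)[::-1]
    return np.trapz(np.trapz(u, -z, axis=0), y) / 1e6

z = np.linspace(0.0, -3000.0, 61)
y = np.linspace(0.0, 800e3, 41)
# Synthetic front: isopycnals deepen northward across the section.
rho = 1027.0 + 0.5 * np.tanh((-z[:, None] - 1000.0 - 1e-3 * y[None, :]) / 400.0)
print(f"DCL transport ~ {dcl_transport(rho, y, z):.0f} Sv")
```

With these invented numbers the estimate comes out at a few tens of Sv, the right order of magnitude for an ACC-like front.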
The relative importance of the different pressure gradient contributions in Eq. (1) or (2) to the geostrophic transport has been addressed by Borowski et al. (2002). They argue, on the basis of the balances of barotropic momentum and vorticity, that the deep transport term $h(\nabla p)_{-h}$ in Eq. (3) across geostrophic contours f/h should be small if these are blocked by continents (as in Drake Passage and other places in the path of the ACC, see the lowest panel of Fig. 6). Then, neglecting the variation of f on the lhs of Eq. (3) and the deep and Ekman transports on the rhs, and integrating along a section of constant bathymetry h = const, we find that the transport normal to such a contour is related to the difference of baroclinic potential energy between the ends, i.e. $f_0\,\Delta\psi \sim \Delta\chi$, a relation which we encounter again in Section 3.5. In models with simplified geometry such conditions can easily be established. In a series of experiments with zonal channel geometry (see Fig. 20), but also in global coarse resolution OGCMs, Borowski et al. (2002) could grossly verify the relation

$$f_0\,\frac{\partial\psi}{\partial y} = \frac{\partial\chi}{\partial y} \qquad (4)$$

between the meridional gradients of the streamfunction and the potential energy. Figure 6 compares the transport pattern (upper panel) and its reconstruction via Eq. (4) (middle panel) in a global coarse resolution OGCM. While there are clear differences in the closed basins of the major oceans, the overall agreement of the streamfunction and its reconstruction is rather good within the ACC region (ψ and its values reconstructed from the gradient of potential energy coincide within 10%). By and large, the contribution from the bottom pressure gradient to the transports is thus small.
What is so special about the dynamics of the ACC?
The zonal periodicity of the Southern Ocean, creating a circumpolar pathway for watermasses to circle the globe and allowing the ACC to play a major part in the conveyor belt circulation, has already been mentioned. But the zonality also acts as a brake. In the basins which are zonally blocked by continents there is a meridional exchange of heat accomplished by the time mean gyre current systems. There is no such mean transport of heat across the latitudes of the ACC (DeSzoeke & Levine 1981). Instead, the loss of heat from the ocean in the area south of the ACC must be carried across the current by smaller-scale and/or time varying features in the current field, usually summarized as the mesoscale eddy field. Transient eddies with scales of tens to a few hundred kilometres (much larger than the baroclinic Rossby radius, which is of order 10 km in the Southern Ocean, Houry et al. 1987) are a very prominent feature along the path of the ACC. There are also small stationary features, sometimes attached to outstanding topographic peculiarities, which fall into the mesoscale eddy category. With the exception of the western boundary currents in the subtropical gyres, the variance of transient features in the ACC dominates the global distribution of variability of surface displacement obtained from satellite altimetry, particularly in areas of shallow rough topographic obstacles (e.g. Gille et al. 2000) and meridional excursions in the path of the flow. Estimates of the meridional eddy heat flux from a number of moored instruments confirmed the southward transfer with sufficient magnitude to close the overall heat budget (see Fig. 7). Recently Gille (2003)
Transient and standing eddies
In the description of the global atmospheric circulation it is customary to reduce the information contained in observations by considering zonally averaged time mean fields and deviations from them (see e.g. Peixoto & Oort 1992). For example, the meridional velocity v is split into its zonal-plus-time mean $\bar{v}$ and the deviation $v^*$, so that $v = \bar{v} + v^*$. The mean meridional heat flux (divided by $\rho c_p$) is then $\bar{v}\,\bar{T} + \overline{v^* T^*}$, which identifies a flux achieved by the mean fields and a flux carried by the covariance of the deviation fields (the 'eddies'). Clearly, the combined zonal-plus-time mean of $v^*$ vanishes, i.e. $\overline{v^*} = 0$, but the covariance $\overline{v^* T^*}$ in general does not. Motivated by the zonal unboundedness of the ACC, this separation of fields and covariances has been applied to data (from models, since synoptic maps of ACC properties do not exist) for the belt of latitudes passing Drake Passage (e.g. Killworth & Nanneh 1994, Stevens & Ivchenko 1997, Olbers & Ivchenko 2001). We elucidate the typical results of zonal averaging using results from the global ocean POP model (Parallel Ocean Program, see Maltrud et al. 1998), which marginally resolves the transient eddy field with a resolution of roughly 6.5 km in polar latitudes. The time mean sea surface topography of the simulation is shown in Fig. 8, revealing a quite realistic ACC (compare to Fig. 2) as a collection of strong, regionally bounded jets which break up at topographic features and which in sum pass Drake Passage but do not at all follow the corresponding belt of latitudes. Consequently, in the zonally averaged picture (Fig. 9, upper panels) many details of the ACC current system are lost. The averaged state in the Drake Passage band of latitudes is picked from the stronger features at the southern rim of the ACC in that latitude interval and thus misses most of its circumpolar structure. Moreover, the average does not at all represent the local structure of the current in Drake Passage. The transport of the ACC through Drake Passage is 130 Sv in this POP simulation, which is very close to observations. In contrast, the transport of the zonally averaged current is only about 50 Sv.
Most of the ACC actually finds its representation in the averaged picture at latitudes north of the Drake Passage belt (see for example the zonally averaged zonal current in Fig. 9). As a consequence we have standing eddy contributions which are strong compared to the transient eddy contributions. This is exemplified in Fig. 9 by the standing and transient eddy density fluxes $\overline{v^*\rho^*}$ and $\overline{v'\rho'}$, respectively. In the Drake Passage belt these fluxes are of comparable size, and northward of it - at blocked latitudes - the flux of the standing eddies overwhelms the flux of the transient eddies by an order of magnitude.
The lower suite of panels in Fig. 9 displays the same fields using an alternative average which is oriented along the contours of the time mean sea surface height (SSH; the POP code has a free surface implemented): we show the mean tangential velocity and the component of the density flux by standing and transient eddies which is normal to the SSH contours ('standing' now refers to the deviations from the convoluted path). This path-following average clearly captures more of the properties in the ACC region than a zonal average. A similar streamwise average analysis is presented in Ivchenko et al. (1998) and Best et al. (1999). The mean tangential velocity collects all jets into a strong current - surprisingly with one single core. It is eastward everywhere and centred at the height contour -0.5 m (mean latitude of 49°S), with the highest speeds at the surface of about 0.2 m s⁻¹, which is two times the maximum of the zonally averaged zonal velocity. The eddy fluxes, shown in the middle and left panels, demonstrate that time mean and transient fields are separated to a large extent: the standing eddy component is still non-zero (because the flow slightly turns with depth) but is clearly much diminished compared with the zonal mean, and negligible compared to the transient component of the path-following mean.
In summary, we conclude that the zonal average does not separate the time mean and the transient motion in a simple way. Zonal mean, standing eddy and transient eddy components arise and the standing eddy component is a major player. Dynamically it belongs to the time mean flow but it overrides the transient component. When analyzing dynamical balances in the latitudinal-longitudinal coordinate system standing and transient components have their physical meaning (e.g. in the balance of zonal momentum which will be discussed in many places in this paper). But building models of a mean circulation in a zonally average framework is inherently hampered by an inadequate treatment of the standing eddy component. Because it is intractable to parameterizations it is generally neglected but is larger than the transient component for which reasonable parameterizations are known (Johnson & Bryden 1989).
The path following (or convoluted) average produces a much clearer separation of the flow into time mean and transient components. In this framework the coordinate system is attached to the specific flow (the model SSH contours in the above example). Analysing balances or setting up models in the convoluted average frame is conceptually simpler because standing eddies can be neglected but the fields (velocities, fluxes etc) are oriented at the convoluted coordinates. For instance, in a convoluted average analysis we would consider the balance of the along-stream component of momentum with the alongstream component of wind stress entering, rather than the balance of zonal momentum. In the course of the paper we will frequently have recourse to these different averaging concepts.
Interfacial and bottom form stress
Eddies not only carry heat in the mean but establish a transfer of momentum as well. While lateral eddy momentum fluxes turned out to be rather small (compared to the wind stress) and indifferent in sign (Morrow et al. 1994, Phillips & Rintoul 2000, Hughes & Ash 2001), the ACC is the outstanding example in the ocean of diapycnal transport of momentum by eddies. Since the momentum imparted to the ocean surface layer by the strong zonal winds in the Southern Ocean cannot be balanced in the Drake Passage belt by large-scale zonal pressure gradients - a consequence of the lack of zonal boundaries - and because lateral Reynolds stresses are too small for significant transport away from the ACC towards boundaries, a downward transfer of momentum is the only mechanism to prevent indefinite acceleration of the zonal flow in the surface layer. The diapycnal momentum transport cannot be achieved by small-scale three-dimensional turbulence (it would require viscosities far too large); it must be done by the eddies. Depending on the framework - layer or level coordinates - different kinds of eddy terms arise in the dynamical balances. We first present the dynamical balances for a layer framework, and later describe the corresponding physics in level coordinates (at the end of Section 3.3 and in Section 4.1).
The most important mechanism of momentum transfer in layer coordinates is the eddy interfacial form stress (IFS). It operates everywhere in the ocean where eddies are present and deform isopycnals but the unique sign and magnitude in the vast circumpolar area is truly outstanding. IFS transfers horizontal momentum across inclined (by eddies) isopycnals by fluctuations (by eddies) in the zonal pressure gradient. Imagine two interfaces (isopycnals) z = -d 1 (x) and z = -d 2 (x) along any circumpolar path with coordinate x along it and integrate the (negative) pressure gradient -p x between the interfaces and around the path, (5) to get its contribution to the rate of change of x-momentum in the corresponding volume. The pressure is taken at the isopycnal depth and its gradient appearing in the second formulation thus acts across the inclined isopycnal. Obviously, to get a non-zero (the overbar indicates the path and time mean) the pressure must vary at the isopycnal depth in a way that an out-of-phase part with respect to the depth variations is present (see Fig. 10). Evidently, the strip of ocean gains x-momentum by the amount from the fluid above z = -d 1 (x) and loses to the fluid below z = -d 2 (x). Thus, for infinitesimally distant isopycnals the vertical divergence of the interfacial form stress IFS = enters the momentum balance. The mean depth is not relevant, only the eddy component contributes (also for pressure) so that
IFS =
The starred quantities contain the signal from the time-mean 'standing' eddies and the signal from the time varying transient eddies and the IFS may be separated accordingly.
Equating the zonal pressure gradient with the northward geostrophic velocity, fv* g = p* x , and the layer depth fluctuation with (potential) temperature anomaly, , we find that the IFS relates to the meridional eddy heat flux, A poleward eddy flux of heat is just a downward transport of zonal momentum by IFS in the water column. These processes are strictly coupled. The transient eddies which carry the poleward heat flux shown in Fig. 7 thus establish a downward transport of momentum.
In summary, though horizontal pressure gradients can only establish a transfer of horizontal momentum in horizontal direction they do transport horizontal momentum across tilted surfaces from one piece of ocean to another. A layer bounded by tilted isopycnals is thus forced by stresses (IFS) at the bounding top and bottom surfaces (in the same way as the Ekman layer is driven by frictional stresses at top and bottom).
Deriving the relation Eq. (5) it was assumed that the isopycnal strip does not run into the bottom nor touches the sea surface. If this situation occurs additional pressure terms arise from the bounding outcrops. These terms present a flux of horizontal momentum through these boundaries into the strip. For the bottom contact the corresponding flux is part of the bottom form stress (see below).
Notice that the same mathematical operations used to derive Eq. (5) apply if the interface is solid as, for example, the ocean bottom at z = -h(x). The bottom form stress BFS = operates here to transfer zonal momentum out of the fluid to the solid earth (since h is constant in time only the time mean bottom pressure is relevant). BFS works everywhere in the ocean where the submarine ocean bed is inclined but to be of significance the gradient of the bottom pressure (the normal geostrophic velocity) must be correlated to the ocean depth variations, or vice versa: the bottom pressure must be out of phase with the depth along the respective circumpolar path, e.g. there must be high x dp CIRCULATION OF ANTARCTIC CIRCUMPOLAR CURRENT 447 Fig. 10. Schematic demonstrating the interfacial form stress for an isopycnal interface in the water (shown is the zonal depth). There is higher pressure at the depth of the density surface where it is rising to the east compared with where it is deepening to the east. This results in an eastward pressure force (interfacial form stress) on the water below. This is related to the fact that the northward flow occurs where the vertical thickness of water above the density surface is small, and southward flow where the thickness is large, so there is a net southward mass flux at lighter densities due to the geostrophic flow. The same kind of pressure force acting on the sloping bottom topography leads to the bottom form stress. Redrawn from .
pressure at rising topography and low pressure at the opposite falling slope to the east to let eastward momentum leak out to the earth. A depth-pressure correlation can in fact be seen in circumpolar hydrographic sections passing through Drake Passage around Antarctica, as shown in Fig. 11. From the density ρ we can infer the baroclinic bottom pressure contained in the mass stratification. It is obvious in the section that there is more lighter water to the west of the submarine ridges than to the east. Surprisingly the BFS derived from such a pattern accelerates the eastward current, acting thus in cooperation with the eastward wind stress -a feature of the ACC dynamics which will be reconsidered in the course of this paper.
The dynamical balance of the zonal flow
The IFS and BFS contributions to the physics of zonal currents can be elucidated by a simple conceptual model. Consider a strip of ocean from Antarctica to the northern rim of the ACC and split the water column into three layers (which may be stratified), separated by interfaces which ideally are isopycnals (see Figs 11 & 12). The upper layer from the sea surface z = η 0 = ζ to some isopycnal at depth z = -η 1 and includes the Ekman layer, the intermediate layer with base at z = -η 2 lies above the highest topography in the Drake Passage belt (the range of latitudes which run through Drake Passage), and the lower layer reaches from z = -η 2 to the ocean bottom at z = -η 3 = -h. We apply a time and zonal average to the balance equations of zonal momentum in the three layers and use Eq. (5) to get (6) where the depth and zonally integrated northward volume flux in each layer are denoted by i = 1, 2, 3. Furthermore, p i is the pressure at the respective layer depths, p 3 = p b the bottom pressure and the overbar denotes time and ACC path following mean. Note that the surface term drops out in the first equation because the surface pressure is p 0 = gζ. As before, the star denotes the deviation from this average, τ 0 is the wind stress, τ i the frictional stresses at interfaces, τ 3 = τ b the frictional bottom stress, and R i = the divergence of the appropriate lateral Reynolds stress. We will assume that the interfacial friction stresses τ 1 , τ 2 and the R i can be neglected (which is confirmed by measurements, e.g. Phillips & Rintoul 2000, and eddy resolving models). The meridional circulation is defined by transports between isopycnals and is thus of Lagrangian quality. The wind-driven component -τ 0 /f (the Ekman transport) in the top layer, a similar frictional transport τ b /f in the bottom layer, and a geostrophic component in the bottom layer, associated with the bottom form stress also appear if the flow is averaged between geopotential (constant depth) levels (see end of this section). These Eulerian quantities form the Deacon cell of the Southern Ocean meridional overturning (see Döös & Webb 1994 and Fig. 24).
Since Σ i = 0 by mass balance of the ocean part to the south (neglecting the very small effect of precipitation and evaporation on mass balance) the overall balance of zonal momentum is between the applied wind stress, the bottom form stress and the frictional stress on the bottom, The frictional stress τ b and the here neglected Reynolds stresses are generally small in the ACC. Munk & Palmen (1951) were the first to discuss this balance of momentum for the ACC (but surprisingly, much of the research on the ACC after Munk & Palmen's article had forgotten the importance of the bottom stress and tried frictional balances, e.g. Hidaka & Tsuchiya (1953), Gill (1968)). Hence, the momentum put into the ACC by wind stress is transferred to the solid earth by bottom form stress. The transfer is at the same latitude because the divergence of the Reynolds stress is small. This balance has been confirmed in most numerical models which include submarine topographic barriers in the zonal flow and have a realistic (small) magnitude of the Reynolds stress divergence. If the ocean bottom is flat (in models the bottom can be made flat) either bottom friction may get importance and/or the neglected Reynolds terms could come into play. Eddy effects seem to be unimportant in the vertically integrated balance but it is worth mentioning that most coarse OGCMs do not confirm Eq. (7), see Cai & Baines (1996). The reason is that such models use very large lateral viscosities so that the parameterized Reynolds stresses become large, even though the simulated current is broad and smooth. Figure 13 exemplifies the total zonal momentum balance with results from the eddy resolving POP model (Parallel Ocean Program, Maltrud et al. 1998). It is instructive to write the pressure p as sum of the baroclinic (density related) part and the barotropic (surface related) part gζ. While the total bottom form stress clearly takes out the momentum put in the ocean by wind stress we have seen above in Fig. 11 that the baroclinic part does not have the corresponding sign: according to the phase shift of density with respect to the submarine topography the baroclinic bottom form stress should accelerate the eastward current. Indeed, this has been found in the analysis of the eddy permitting model FRAM (Fine Resolution Antarctic Model, Fram Group 1991). The right hand panel of Fig. 13 displays the balance with the pressure terms and separated. Individually they are much larger than the zonal wind stress, by about an order of magnitude, but of opposite sign and thus they nearly cancel. This feature in the dynamical balance of the ACC will be further discussed in Section 4.2. A summary of the balance of zonal momentum in the ACC is displayed in Fig. 14.
A global perspective of the zonal balance is presented in the experiments which Bryan (1997) has performed with a non-eddy-resolving OGCM. In his findings the balance between zonal wind stress and bottom form stress prevails everywhere in the world ocean and likewise, we find in all Olbers (2005). Right: vertically integrated balance of total momentum from the FRAM model in the Drake Passage belt: 1 is baroclinic form stress, 2 barotropic form stress, 3 zonal wind-stress (units Nm -2 ). From Stevens & Ivchenko (1997). experiments (run with different wind climatologies) the approximate cancellation of the barotropic and baroclinic form stresses which individually are an order of magnitude larger than the wind stress (see fig. 11 of Bryan 1997). It is noteworthy that the signature of the momentum balance found here for the ACC -with driving by the baroclinic form stress and braking by the barotropic -is only found in Bryan's results poleward of the subtropical gyres.
If, in addition to the assumptions of small R i and τ 1 , τ 2 and τ b , the flow conserves potential density then there cannot be transport across isopycnals and the meridional transport in each layer must vanish, by mass conservation. We find that the interfacial form stress is vertically constant and equal to the wind stress τ 0 and to the bottom form stress, (8) Then, in each layer, the meridional mass fluxes induced by wind stress and pressure gradients are compensated (in models with a flat bottom, the bottom form stress must be replaced by the frictional bottom stress in the above relation). This scenario of 'constant vertical momentum flux' is realized in quasigeostrophic layer models (Wolff et al. 1991, Marshall et al. 1993, Olbers 2005 which are by construction adiabatic. The real ocean is diabatic, i.e. there is mixing across isopycnals by small-scale turbulence and air-sea fluxes, but it is still in debate if it occurs predominantly between the outcropping isopycnals in the surface layer or in the interior as well (see Section 5). The meridional overturning transports at a certain latitude circle can be non-zero only if there is exchange of mass between the layers south of the respective latitude -implying conversion of watermasses south of the ACC. In fact, by mass equal the net exchange with the neighbouring layers over the area south of the respective latitude. At the same time, the overturning transports imply a Coriolis force in the individual isopycnal layers which is in balance with the vertical divergence of the interfacial form stress. The divergence of the heat flux due to transient eddies can clearly be deduced from Fig. 7 (we have shown that roughly ). Eddy effects at the respective latitude and diabatic interior effects of smallscale turbulence occurring to the south must thus adjust according to mass and momentum requirements of the zonal current and the meridional overturning. The isopycnal analysis of the zonal momentum balance in the FRAM model by Killworth & Nanneh (1994) can be taken to exemplify the importance of diabatic processes and the inapplicability of the 'constant vertical momentum flux' scenario: there is a net meridional circulation at all depths in balance with a divergent IFS and wind stress (the latter influences also deeper isopycnal layers which outcrop at some longitude along the circumpolar path).
We have so far discussed the momentum balance in a Lagrangian framework by using isopycnal layers. But many dynamical concepts and the numerical models are written in an Eulerian framework where geopotential (depth) horizons are the vertical coordinates. Interfacial form stress is then invisible, and apart from Reynolds stress terms the zonally averaged balance of zonal momentum (9) has no obvious signature of eddy effects at all. Here, τ is stress (vertical transport of zonal momentum by turbulent motions) in the interior. The sum of the bottom pressure differences is extended over all submarine ridges interrupting the integration path at depth z (continents are included). Each ridge or continent contributes to the difference between the values on the eastern side and the western side, i.e. δp b = p(x E , y, z = -h) -p(x W , y, z = -h). The curly bracket operator denotes zonal integration on level surfaces and a* = a -{a} /L is the deviation (again standing plus transient eddy component), L is the path length and the overbar is now only the time mean. From mass balance it may be shown that the vector {v} = -∂φ/∂z, {w} = ∂φ/∂y has a streamfunction φ (the meridional Eulerian streamfunction) despite possible interruptions of the zonal path by submarine topography. Equation (9) may then be integrated vertically from the surface to some depth z to yield the balance for the depth interval from the surface to the level z, where R collects the integrated Reynolds stress divergence and F is the bottom form stress cumulated at the level z from the bottom pressure term, (11) Since φ (z = -h) = 0 the balance Eq. (7) is recovered if R neglected as before, with F(z = -h) as the total form stress. However, instead of the interfacial form stress balance we are now facing in the interior a balance between the integrated Coriolis force and frictional, Reynolds and bottom pressure stresses. The balance is described in Stevens & Ivchenko (1997) for FRAM and repeated for many other eddy resolving models (see the summary in Olbers 1998). For POP we show the terms of the vertically integrated balance Eq. (10) in Fig. 15. It is obvious which terms are the main players in the different depth ranges. In the top layer these are Coriolis force and wind stress. At intermediate depths where topography is not yet intersecting there is little change in φ with depth, i.e. small meridional transport and thus small Coriolis forces balanced by small Reynolds and frictional effects 2 . And in the deep blocked layers the balance between Coriolis force and bottom form stress can be seen. The total balance of zonal momentum in this POP experiment is shown in Fig. 13.
Where are the eddy effects in this framework? They are hidden in the Eulerian Coriolis force, as will be discussed in Section 5 where we proceed with the Eulerian framework and include the still missing connection to thermohaline forcing and turbulent mixing.
Failure of Sverdrup balance
An outstanding feature of Southern Ocean dynamics is the failure of one of the cornerstones of theoretical oceanography -the Sverdrup balance βψ x = curl τ 0 . It relates the northward transport V = ψ x (the meridional velocity vertically integrated from the bottom to the surface) to the local curl of the wind stress vector τ 0 . Here, β = df/dy is the meridional gradient of the Coriolis parameter f. Closure of the circulation occurs by a western boundary current to satisfy mass conservation. Apparently, in the range of latitudes of Drake Passage the Sverdrup balance must fail: Vdx must be zero to ensure mass conservation of the piece of ocean to the south, yet the wind stress curl will not integrate to zero in general. Contrary to the circulation in an ocean basin, we cannot overcome this problem by some kind of boundary current returning the mass flux.
Nevertheless we are aware of many attempts to generate a Sverdrupian solution for the ACC. Notable is Stommel's approach (Stommel 1957) where the Antarctic Pensinsula is expanded to the north to block the Drake Passage latitude band and allow only for a northward passage. Similar barotropic theories have been presented more recently by Ishida (1994), and Hughes (2002). While we may classify these studies as theoretical test cases, the approach of Baker (1982) is more intriguing: in an attempt to estimate the ACC transport from wind data Sverdrup's balance is integrated along 55°S, i.e. just north of Drake Passage in a possibly 'Sverdrupian regime', starting at the west coast of South America and extended to the east flank of the ACC system, leaving out the part where it shoots northward after leaving Drake Passage (see Figs 1 & 2). Because of mass conservation the ACC transport running through a piece of the section that must be equal to the (negative) integrated curl of the wind stress of the remaining part -the ACC transport could then be explained entirely in terms of a certain property of the Southern Ocean wind system. In fact, for particular wind stress climatology data Baker found reasonably good agreement with the observed ACC transport. The fallacy in Baker's approach is not in the particular choice of the integration path, it is that the Sverdrup balance is not applicable to most of the Southern Ocean (and possibly most of the world ocean, see Hughes & De Cuevas 2001). It neglects the interaction of the circulation with the topography, which could be suspected to be important from the penetration of the ACC to great depth, unlike currents in basin gyres. In ocean basins the deep pressure gradients are shut off during spin-up of the circulation by westward propagating baroclinic Rossby waves of successively increasing vertical mode number (Anderson & Gill 1975). In the Southern Ocean the strong and deep reaching eastward current hinders even the fastest (first baroclinic) mode from westward propagation (see e.g. Hughes et al. 1998). The establishment of deep pressure gradients not only makes the work of the bottom pressure on topography, the BFS, effective, it also modifies the Sverdrup theory. The Sverdrup balance derives from the planetary vorticity conservation, βv = fw z + curl τ z , which states that a piece of the water column which is affected by friction (τ is the frictional stress appearing locally in the water column) or is stretched vertically must experience an appropriate advection of planetary vorticity (βv = vdf/dy). The Sverdrup balance results by vertical integration of the vorticity balance under the assumption of vanishing vertical motion at great depth so that there is no stretching of the total water column and no friction at depth. In the presence of submarine topography and deep pressure gradients this is not valid: geostrophic flow across topography induces a vertical motion, h), and stretching 4 . Hence, with the rigid lid assumption at the surface, w(z = 0) = 0, we get (12) The frictional stress τ b of the flow on the bottom is Locally the Coriolis forces are large. They generate the pressure gradients which are needed to establish the interfacial form stress. 4 We introduce here the Jacobian operator, J(a, b) = a x b y -a y b x .
generally small but the so called bottom pressure (or topographic) torque J(p b , h) (Holland 1973) can locally be very large, even overwhelming the torque by the wind stress by an order of magnitude or more. This is demonstrated in Fig. 16 showing the streamfunction and the bottom pressure torque in a simulation with the global eddy permitting OCCAM model (Ocean Circulation and Climate Advanced Modeling Project, see Coward (1996) for details). Clearly, northward excursions of the current are correlated with positive bottom torques and southward with negative, as suggested by the barotropic vorticity balance Eq. (12). We should mention that this view applies on scales of a few degrees which are clearly larger than those of individual eddies. On smaller scales the neglected advection of relative vorticity comes into play and the dominant balance is between the bottom torque and nonlinear advection terms (Wells & De Cuevas 1995). In any case the simple Sverdrup theory does not apply. Notice that the zonally averaged balance of barotropic vorticity is consistent with the balance of total momentum Eq. (7): integrating Eq. (12) around a latitude circle yields the meridional divergence of Eq. (7). Though there is local compensation of the β-term and the bottom form stress as indicated in Fig. 16, the wind curl and the bottom torque balance in the zonal mean.
We should like to point out that Eq. (12) is merely a balance that the circulation has to satisfy (possibly augmented by the so far neglected terms such as lateral friction, see below). It is not sufficient to determine the streamfunction because the bottom pressure and frictional torques are not prescribed functions like the wind stress curl but rather must be determined from a complete solution. How this can be achieved is the subject of the next section.
The geostrophic contours
In the β-term of Eq. (12) the transport V = ψ x appears which is normal to latitude circles. From a mathematical point of view, latitude circles are the characteristics of the differential equation. Some of the problems discussed above arise from the periodicity of these characteristics in the latitude belt of Drake Passage. There is another vorticity The advantage of Eq. (13) over Eq. (12) is obvious in homogeneous ocean: for constant density the JEBAR term drops from Eq. (13) whereas the latter would still contain the bottom torque of the barotropic (surface) pressure contained in p b .
With wind stress and potential energy prescribed Eq. (13) is able to predict the streamfunction if suitable boundary conditions are set. Besides conditions required by the lateral friction term (usually no-slip condition for U on the coasts) we have to satisfy mass conservation, which requires ψ = constant on coasts, with different constants on the different islands because these values determine the transports between them. One constant may be set to zero without restriction (e.g. ψ = 0 on the American continent), the other constants must be predicted, which states the need for additional equations. These follow from the requirement that a solution of Eq. (13) must allow the calculation of the pressure field p b from the momentum balance Eq. (3) (with F included; p b is calculated by path integration from one coastal point where its value may be set arbitrarily). In a multi-connected domain, with islands present, the uniqueness of p b is guaranteed by the integrability conditions (15) around each island (e.g. Antarctica) on an arbitrary path. With n islands (or continents) there are thus n -1 such conditions which render the reconstruction of p b pathindependent.
The barotropic vorticity Eq. (13), together with the constraint Eq. (15), is evidently a central tool for the determination of ocean transports. The potential energy χ has to come from the baroclinic equations of heat and salt conservation where ψ couples in via advection. Numerical ocean models using the rigid-lid approximation actually determine the depth-integrated velocity vector from a vorticity equation setup as Eqs (13) & (15).
The importance of the geostrophic contours and of JEBAR were demonstrated in a series of early numerical experiments with the GFDL model of the world ocean circulation. Bryan & Cox (1972) presented the circulation for a homogeneous fluid (constant density, thus having zero JEBAR) in an ocean with continents but of constant depth. Cox (1975) extended the studies to the cases of variable topography (with blocked f/h contours but still zero JEBAR) and also to a topographic ocean with baroclinicity, hence nonzero JEBAR. Due to limited computer resources, the last experiment was largely a diagnostic simulation, i.e. the thermohaline fields do not deviate much from the initial state taken from observations. But full prognostic experiments have since then been repeated many times (e.g. Han 1984a, 1984b, Cai & Baines 1996 with very similar results. New simulations of these cases are discussed in Olbers & Eden (2003) and depicted in Fig. 17. They were obtained with the BARBI reduced physics model which uses (13) and a balance equation for χ, (16) in which the time rate of change, the barotropic transport of χ, the vertical advection (the divergence term, see below) of a background stratification (represented by the Brunt-Vaisala frequency, ) and diffusion is included. The potential energy χ is then associated with the deviation of density from the mean density which leaves JEBAR unchanged. The term proportional to N 2 represents the generation of baroclinic potential energy by lifting or lowering the background isopycnals, i.e. it derives from in the density balance. The vertical pumping is done by a contribution from the barotropic velocity U/h and a vertical moment of the baroclinic velocity, for which a separate equation is derived from the momentum balance (for details see Olbers & Eden 2003). In essence the coupled system Eqs (13) & (16) is representing a wave system consisting of planetary-topographic Rossby waves forced by wind stress.
The flat bottom, homogeneous ocean has an ACC transport of a couple of hundreds of Sverdrups (more than 600 Sv in Bryan & Cox 1972, 700 Sv in BARBI). The homogeneous ocean with topography has very low ACC transport (22 Sv in Cox 1975, 35 Sv in BARBI), and the third experiment, now considering baroclinic conditions in a topographic ocean, generally gets a realistic transport for the ACC (187 Sv in Cox 1975, 130 Sv in BARBI). How can we explain this behaviour?
The flat bottom case has an almost zonal ACC driven by the zonal wind. Since bottom form stress and bottom torque a. b. c. d.
cannot operate, friction is the only momentum sink and with the diffusive parametrization of lateral eddy induced transports of momentum by a diffusivity A h , as in Eq. (14), the zonal transport is proportional to Y 3 τ 0 /A h where Y is the width of the current. We are facing 'Hidaka's dilemma' (Hidaka & Tsuchiya 1953, see Wolff et al. 1991: either we implement a reasonably sized diffusivity and then get an unrealistically large transport or we must use an unrealistically large eddy viscosity to get a reasonable size of the ACC transport. The topographic homogeneous case has a transport that is far too low. The system now establishes a bottom form stress and a bottom pressure torque (but not JEBAR) from the surface pressure being out-of-phase with the submarine barriers of the flow (see Fig. 14). The current is mostly along the geostrophic contours which are north of the Drake Passage belt. Apparently, the frictional torques in Eq. (13) are too weak to push mass across these contours in a significant amount.
There is an interesting lesson to learn from the momentum balance Eq. (3): the component oriented along f/h = const is given by (17) where s is the path length coordinate along geostrophic contours and × is the tangential unit vector. If the contours are blocked by continents (as in the Pacific sector, see Figs 6 & 18) the transport between them is ∆ψ • (h/f)∆p b + V Ek where V Ek is the Ekman transport across the contour and ∆p b the pressure difference between the continents (the other friction terms are small in this regime). We may interpret this latter term as a net geostrophic transport sustained by the pressures on the coasts. On the other hand, with small or zero ∆ψ we see that the net wind stress along f/h-contours is taken up by a pressure difference on the continents -just as in the Sverdrup circulation regime in a flat bottom basin bounded by continents. For the f/h-contours closed on the rim of Antarctica we have a Hidaka-type transport regime where wind stress is balanced by friction. The transport, however, is small because the wind is much weaker (and actually westward, see Fig. 19) and the width of this region small. The closed f/h-regimes on the Mid-Atlantic Ridge and around Kerguelen are governed by friction as well as balancing a small net Ekman transport into these regions.
The final case which considers topography and baroclinicity gets a reasonably sized ACC transport for which clearly JEBAR is responsible. It is an order of magnitude larger than the wind curl but that property alone would not explain why this new forcing should not be blocked by the f/h-contours as the wind curl is blocked. In fact, the JEBAR field has a very particular spatial structure: highs and lows are placed right along the undulating path of the geostrophic contours (see Fig. 18) to help the current to circumvent the f/h-constraint. We may state this property in Fig. 17 c & d it appears that ψ and χ are highly correlated, suggesting a functional relation χ = C(ψ) established by the dynamics. This relation is plotted in the last panel of Fig. 18. A reasonable fit is suggested from Eq. (4), thus roughly χ = f 0 ψ + const, and this casts the vorticity balance (13) into (18) The topographic β-term and JEBAR in Eq. (13) combine to achieve new (unblocked) characteristics which actually are those of the flat bottom problem.
The dependence of transport on forcing
What are the mechanisms and forcing functions that determine the transport of the zonal flow? The considerations of momentum and vorticity balances and the associated fluxes through the circulation system, outlined in the previous sections, do not answer this question. Indeed, one cannot expect a prediction of the ACC transport from just one or two integral balances. They indicate, however, that wind forcing and vertical momentum flux by transient and standing eddies and waves are important but also the processes which set up the surface and interior pressure field, which is not really helpful because it covers nearly all possible mechanisms. The baroclinic pressure aspect brings the local surface fluxes of heat and freshwater into focus (see Fig. 19). Combined as the surface density flux they determine the density field in concert with advection and diapycnal mixing -and then it might correctly be suspected that side issues such as mixing by small-scale turbulence and remotely forced agents, such as the import of NADW into the Southern Ocean, could have an influence on the ACC transport. A complete theory capable of predicting the absolute transport of the ACC is thus a formidable challenge. For quantitative answers a full model including external forces by the wind stress and the surface fluxes of density (or buoyancy) as well as the advection of mass (volume) and density must be solved, which points towards studies with numerical OGCMs. Though quite a suite of carefully designed numerical experiments exists (some have been mentioned in the previous sections and more will be discussed below) their contribution towards an understanding of the shaping of ACC transport is limited. Many of the studies have the flavour of an engineering task: changing parameters and/or forcing and monitoring the results. These provide qualitative answers but a deeper insight into the dynamics of the ACC can be obtained by cheaper methods which may reveal mechanisms in trade for completeness.
We have mentioned above the Hidaka regime, describing an entirely frictional zonal current in a flat bottom ocean with a simple transport formula. Another simple concept follows from the momentum balance in adiabatic conditions which results in Eq. (8). Instead of emphasizing the lateral eddy flux of momentum as in the previous flat bottom model it is based on the 'constant vertical flux scenario' (appropriate to an eddy-active ocean with no diapycnal mixing). Replacing the IFS by the lateral eddy heat or density flux as indicated in Section 3.2 we get the Johnson-Bryden relation (Johnson & Bryden 1989), here written for density and transient eddies (denoted by a dash). The standing eddy component is neglected (which is a severe assumption because it exceeds the transient component in realistic conditions) or -if the mean is interpreted as average along the current path -τ 0 is not the zonal wind stress but rather the path following component (so we use dashes in the following arguments). According to the formula the 'northward' eddy density flux in the circumpolar belt of the ACC, suitably normalized, is of the size of the 'zonal' wind stress τ 0 (so there would no meridional overturning circulation, which may rightly be questioned). In a first step Johnson & Bryden parameterize the transient lateral eddy flux by a down-gradient form, , and find that the wind stress and the eddy diffusivity constrain the slope of the isopycnals, . Such a relation is roughly consistent with the observed slopes in the ACC belt 4 if the eddy diffusivity is of order K ~ 10 3 m 2 s -1 (take s = 10 -3 ,τ 0 = 10 -4 m 2 s -2 ). Johnson & Bryden (1989) proceed replacing the lateral density gradient using the thermal wind relation, , and find (19) where the vertical density gradient is replaced by the squared Brunt-Vaisala frequency . Apparently, K(f/N) 2 defines an equivalent diffusivity for the vertical momentum transfer which is achieved by lateral density diffusion (see e.g. Rhines & Young 1982, Olbers et al. 1985. We see here the same equivalence between vertical momentum transfer and horizontal heat transfer by eddies as in Section 3.2. In a second step Johnson & Bryden used Green's form (Green 1970, Stone 1972) of the diffusivity K = αR 2 /T. It is obtained for a baroclinically unstable zonal current, where R is a measure of the eddy transfer scale, and T = N/|fu z | is the Eady time of growth of the unstable eddies (Eady 1949). The constant α measures the level of correlation between v' and ρ' in the density flux (α = 0.015 ± 0.005 according to Visbeck et al. 1997). Johnson & Bryden's result for the ACC transport is obtained by relating the turbulence scale R to the baroclinic Rossby radius λ = Nh/(|f|π). For R = π 2 λ we get their estimate of the shear (20) which yields by integration the transport relative to the bottom. The shear and thus also the transport is proportional to the square root of the wind stress. With some reasonable values of parameters a transport of about hundred Sv relative to the bottom can be obtained.
The Johnson-Bryden model has been much discussed as a theory of the ACC transport. Attempts to verify the squareroot relation with numerical models are plentiful (e.g. Gnanadesikan & Hallberg 2000 with coarse-resolution models with simple geometry, Gent et al. 2001 for coarseresolution global models, Tansley & Marshall 2001 for twolayer channel models) but generally without success. This is not surprising in view of the many assumptions put together in this model. First, there is the assumption of the adiabatic state of the flow (zero diapycnal mixing) which is violated in the real ocean but also in numerical models operating on z-levels (isopycnic models may be adjusted close to an adiabatic state). Second, it is not clear whether the Eady model and other details of baroclinic instability theory are appropriate in the ACC eddy field. Certainly some of these features are generated by baroclinic instability but they are not in the initial growth state but rather in some state of equilibration. Furthermore, the above parameterizations are not implemented in most coarse OGCMs. Even eddy fluxes deduced from eddy resolving models show a quite poor agreement with eddy flux parameterizations (Bryan et al. 1999 andIvchenko 2001 for the POP model). Finally, we might also expect that the stratification, entering in the Johnson-Bryden concept only in a prescribed N(z), would at least partly be set by the action of the wind and the overturning circulation, to point here again at the missing thermohaline forcing. This is convincingly demonstrated by Gnanadesikan & Hallberg (2000) with simple models in which the buoyancy forcing has a direct feedback on the density structure (the tilt of the interface in a two-layer ocean). The wind stress and buoyancy feedback interplay in a complex way via the balance of northward Ekman transport and the upwelling through the thermocline to produce a meridional pressure gradient across the unblocked latitudes which is in balance with the baroclinic part of the ACC transport. The net transport is then clearly not uniquely determined by the wind stress.
The above concepts miss the influence of topography. The total momentum balance Eq. (7) contains the part of the bottom pressure which is out of phase with variations of the topography along the zonal path of integration. Some insight into the mechanism which the flow uses to generate bottom form stress has been gained from heavily truncated images of the full dynamics, so called low-order models where the flow fields are represented by very few spectral components (hopefully those of dynamical relevance, see Olbers 2001). In the barotropic Charney-DeVore model (Charney & DeVore 1979, see also Olbers & Völker 1996, Völker 1999) the topography is taken sinusoidal in the zonal direction: if topography is sine, form stress is cosine. The model resolves the zonal current u and the sine and cosine In detail there are substantial and dynamically relevant deviations from this adiabatic model, which we reconsider in Section 5.
components of pressure. The latter are established by a standing barotropic Rossby wave which is generated by the mean flow u going over the topography. At the upstream side of the hills the fluid must be lifted up thus making high pressure, at the downstream side a pressure low follows. The naturally westward propagating wave becomes stationary by eastward advection in the zonal current and friction: it is locked in resonance with the mean flow and produces a form stress which becomes a nonlinear functional of the zonal velocity, (21) Here k is the zonal wavenumber of the topography, δ the ratio of height of the hills above the ocean floor to the mean depth, c R = β/k 2 the speed of barotropic Rossby waves and ε a parameter of linear bottom friction, τ b = -εuh. The total momentum balance (7), written now as τ 0 -ε uh + BFS[u] = 0, then determines the zonal transport uh. Three equilibria are found if τ b /(hε) is well above c R , two are stable circulation regimes. For the two solutions in the resonant range (u close to c R ) the friction in the momentum balance is negligible, these solutions are balanced by form stress. The off-resonant solution is controlled by friction. It is remarkable that friction is essential in all cases to shift the pressure field out-of-phase with respect to the topography. Charney & DeVore (1979) have developed this model for atmospheric flow regimes. In the ocean the resonant solutions do not exist -flow speeds are much less than speeds of barotropic Rossby waves -and reasonable values for the wind stress and the bottom friction allow only for the frictionally controlled solution (22) where a = |f/β| is the earth radius times tangent of latitude. The transport in this barotropic model decays away from the frictional solution uh = τ 0 /ε (with hundreds Sv transport) with increasing height of the topography. The drag of the form stress increases quadratically with the height of the topography and attains higher values than friction for moderately sized submarine ridges. In a baroclinic extension the Charney-DeVore resonance can operate in realistic ACC conditions. This will be discussed in Section 4.2.
We should mention that the above model works on an infinite β-plane but also in a zonal channel -a set-up which might be more appropriate for the ACC. However, here the f/h contours become blocked at some critical topography height (δ > δ c = 2Y/(π|a|), Y = channel width) and then the flow regime is not well represented by a few low-order modes (see Olbers et al. 1992). In fact, analytical and numerical solutions (Krupitsky & Cane 1994, Wang & Huang 1995 regime which is entirely unrealistic for application to the ACC: the flow is in narrow frictional boundary layer currents at the walls, switching side from south to north in a narrow internal layer along a connecting f/h-contour. In this blocked regime the flow is weak due to substantial drag by form stress and the transport is independent of friction.
In the rest of this section we extend the Johnson-Bryden concept to include the missing thermohaline forcing, and present a linear wave barotropic-baroclinic theory of the establishment of the bottom form stress.
Extended Johnson-Bryden type models
We proceed with the zonally averaged Eulerian model of Section 3.3 to extend the Johnson-Bryden concept towards the missing thermohaline component. The vertical eddyinduced flux of momentum enters via the TEM theory (Transformed Eulerian Mean, see Andrew et al. 1987, McIntosh & McDougall 1996 in which Eq. (10) is augmented by a correspondingly averaged balance of potential density ρ. Assuming stationary conditions we start with the density balance (ρu) x + L · ρv = -I x -L · J where (u, v) is the three-dimensional velocity, (I, J) is the small-scale turbulent flux of ρ, and L the (y, z) derivative. We separate density and velocity into a zonal mean part and deviation, e.g. ρ = {ρ} /L + ρ*. As in Section 3.3 the curly bracket operator denotes zonal integration on level surfaces and L is the path length. The balance of mean density B = {ρ} /L is then obtained by zonal integration of the density balance and expressed by (23) where φ is the Eulerian overturning streamfunction, {v} = -∂φ/∂z, {w} = ∂φ/∂y. The eddy density flux is treated as follows: the flux vector is split in the components oriented at the isopycnal, ({v*ρ*}, {w*ρ*}) = -φ ed (-B z , B y ) -K dia (B y , B z ), which introduces a diapycnal eddy-induced diffusivity 5 K dia and an eddy-induced streamfunction φ ed , given by (24) This allows Eq. (23) to be rewritten as The mean density is advected by a combination of the Eulerian current and the eddies with φ res = φ + φ ed which is called residual streamfunction. No approximation has been made yet, Eqs (23) & (25) are identical. According to the common belief, however, the diapycnal flux of density by eddies is small and so we neglect the first term on the rhs of Eq. (25). Eliminating then the Eulerian streamfunction from the momentum balance we get the TEM model The close correspondence of this momentum balance and the isopycnal form (6) becomes obvious when we use K dia ≡ 0 to write the eddy streamfunction as φ ed = -{v*ρ*} /B z which is the 'heat flux equivalent' of the interfacial form stress, as outlined in Section 3.3. Consequently we may view Eq. (26) as extension of the . With the residual streamfunction it includes the thermohaline part, it also includes Reynolds and bottom form stresses. We shall use the model Eq. (26) below in Section 5 to deduce the density field and the residual streamfunction (the overturning circulation) from the forcing of the system by wind and surface buoyancy flux.
Here we attempt to infer from Eq. (26) the magnitude of the zonal transport. We neglect subgrid and Reynolds stresses (which are small) and the standing eddy term (which is small if the mean is ACC path following, as discussed before) and use a downgradient parametrization of the transient eddy flux, {v'ρ'} = -KLB y , by which φ ed = -LKB y /B z = LKs below the mixed layer. The residual streamfunction is inferred from (26), and φ res , taken just below the surface mixed layer, relates to the surface density flux , 0 by φ res B y = , 0 . The two balances in (26) then lead to the relation (27) just below the mixed layer base. A similar relation is used by Speer et al. (2000) to examine transformation of watermasses around Antarctica. Assuming that mixing by turbulence is small in the interior the slope at the mixed layer base is related by Eq. (26) to that at greater depth and Ks -τ 0 /f = const on an isopycnal (see also equation Eq. (43) below). Thus, the relation Eq. (27) holds in the interior as well but the terms on the lhs are taken at the respective latitude and depth and the rhs at the corresponding isopycnal outcrop to the south. If the meridional gradient of density in the surface layer is known the complete interior density field can in fact be determined from τ 0 and , 0 . Marshall & Radko (2003) use Eq. (27) in this 'diagnostic' mode (B y at the surface is given from observations) to infer the structure of the overturning circulation in the Southern Ocean.
Another diagnostic form of Eq. (27) is achieved by the assumption that the vertical gradient B z , or the Brunt-Vaisala frequency N 2 = -gB z , is known from observations. Replacing then the meridional gradient by the vertical current shear, fu z = gB y , we recover the Johnson-Bryden model in an extended form, . Implementing the Green-Stone parametrization into the first case yields the above discussed square-root dependence of transport on the wind stress, and the second case leads to a cubed-root dependence of transport on the buoyancy forcing -g, 0 .
We would like to clarify that the above analysis is not a complete transport theory. Neither B y in the surface layer nor the profile of N 2 can be regarded as universally given parameters, as they clearly will depend on the forcing τ 0 and , 0 . Equation (27), though valid over a large depth range if internal turbulence can be neglected, is not sufficient to completely determine the density field or the current profile from the forcing functions and universal parameters. An additional relation is needed, for example an equation which determines the meridional profile of the slope s or of the residual streamfunction φ res beneath the mixed layer from the forcing. This means that somewhere in the overturning circulation diffusion and mixing must come into playmathematically speaking a non-local problem has to be solved (the complete problem is elliptical, see Section 5). The shortcut described by and where φ res is simply set equal to the Eulerian transport τ 0 /f (or a specified fraction of it) lacks physical grounds. A similar restriction is found in Bryden & Cunningham (2003): in their considerations the residual transport is entirely ignored.
The shaping of bottom form stress
With the presentation of the barotropic Charney-DeVore model we have highlighted the role of long Rossby waves (with wavelength of the underlying topography) in shaping the bottom form stress. We have pointed out that the resonant behavior in the model cannot occur in a barotropic ocean because barotropic waves are too fast. But we can have such a mechanism in a baroclinic set-up: baroclinic oceanic Rossby waves have a speed comparable to zonal velocities in the ACC. In a baroclinic version of the Charney-DeVore model with ACC conditions Olbers & Völker (1996) and Völker (1999) show that baroclinic waves are generated in resonance with the topography and become stationary when the barotropic current speed equals the baroclinic Rossby wave speed. The transport decreases strongly with increasing topography height, starting from a frictionally controlled state at low heights, followed by a transition to a complex resonant regime with multiple equilibria at intermediate heights, and further to a state controlled by barotropic and baroclinic bottom form stress at high topography. Within the limits of such a simple model this latter regime would be appropriate to the ACC. Though the momentum balance Eq. (7) seems to operate here without friction, it is important that the phase shifts of the topographically induced pressure gradients with respect to the topographic undulations are proportional to the coefficients of bottom and interfacial friction of the model, in close correspondence to the barotropic model as given by Eqs (21) & (22). The baroclinic topographic resonance theory determines the transport in adiabatic models in a manner similar to the barotropic Charney-DeVore mechanism: the bottom form stress is a complicated resonance function of the barotropic and baroclinic velocities and the transport follows from Eq. (7) and a corresponding balance for the baroclinic momentum. The structural properties of this low-order model are preserved when the degrees of freedom are increased from the simplest nontrivial model with 11 modes to a number representing a moderately resolved coarse model (with 75 modes). We return to such a low-order model in more detail below. There have been numerous numerical studies with a realistic ACC configuration with coarse resolution models (Olbers & Wübber 1991, Cai & Baines 1996, Gnanadesikan & Hallberg 2000, Gent et al. 2001, Borowski 2003 which investigated the dependence of transport on the buoyancy forcing at the surface, however, without revealing a clear concept of how the transport depends on the forcing functions. Distinguishing between the pressure forces generated by the topographic resonance mechanism or by a thermohaline forcing with zonal variations is non-trivial. It must be kept in mind that a forcing by a restoring term, say γ(T obs (y) -T) for temperature, will always give rise to a nonzonal surface flux if the flow and thus T are non-zonal. Borowski (2003) presents a large number of channel experiments with a numerical primitive equation model (MOM with 1° meridional x 2° zonal grid and 16 levels) studying the sensitivity to forcing and frictional parameters. The forcing fields (wind stress and T obs ) are strictly zonal but restoring is used for temperature. Figure 20 displays the streamfunction, potential energy and a zonal section of temperature through the middle of the channel for two experiments which differ only in the restoring surface temperature (10°C gradient across the channel for experiments SIN1, 20°C for SIN2). 
However, the momentum balance is drastically different (see Fig. 21): in SIN1 the baroclinic form stress drives the eastward flow (clearly visible in Fig. 20: as in Fig. 11 the water on the western slope is lighter compared to the eastern slope), in SIN2 it decelerates (here the lighter water is on the eastern slope), in both cases the baroclinic form stress is in opposition to the barotropic form stress. It seems that the stronger gradient of surface restoring temperature puts the system into a different regime by implementing a stronger non-zonal thermal forcing.
Analytical theories of the ACC which include baroclinicity are mostly done in the quasi-geostrophic framework and use a modal truncation: the layer streamfunctions are represented by a set of structure functions for the (x, y)-dependence so that the dynamical equations are written only as ordinary differential equations for the time dependence of amplitudes. If the system is truncated to a few modes, a so called 'low-order model' is obtained and steady states can be investigated analytically using the mathematics of dynamical system theory or by use of numerical bifurcation tools (Dijkstra 2000). The Charney-DeVore-type models discussed above are of such kind. Here, we extend the model of Völker (1999) by adding thermohaline forcing and implementing lateral diffusion of momentum (more details are given in Borowski 2003). A two-layer zonal channel with quasigeostrophic dynamics is considered, with a sine-shaped topography in the zonal direction. The two streamfunctions are expressed by suitably chosen sinusoidal structure functions. After some eliminations the resulting system is written in terms of six amplitudes representing the barotropic transport T (upper layer plus lower layer transport), the baroclinic transport S (upper layer minus lower layer transport), and respective barotropic and baroclinic sine and cosine components, T s , S s and T c , S c (more accurately, the T and S quantities are based on barotropic and baroclinic velocities). The latter generate the barotropic and baroclinic form stress, respectively. It is clear that with such an enormous reduction, transient eddies are not present. Their effect on the downward transfer of momentum -the interfacial form stress IFS -is parameterized by friction acting on the interface of the two layers with a coefficient κ. The physical mechanism of this friction is completely equivalent to a diffusion of layer thickness (see Gent & McWilliams 1990, Gent et al. 1995. In addition we have lateral diffusion of momentum with a viscosity ε. We have scaled these variables 7 and the dimensionless form of the set of differential equations becomes (34) The forcing appears in W 0 (wind stress) and Q 0 (external buoyancy flux). Terms derived from nonlinear advection are found in the cornered brackets (R,m,n are numerical coupling coefficients depending only on the channel dimensions, see footnote). The respective term in the zonal baroclinic balance Eq. (30) is the interfacial form stress (IFS) induced by the standing eddies. The κ-term in that equation is the corresponding transient eddy interfacial form stress. The baroclinicity enters via the internal Rossby radius λ, and the topography height is δ. Lateral friction operates in the barotropic T-equations and interfacial friction in addition in the baroclinic S-equations. The W 0term in the baroclinic Eq. (30) arises from the Ekman pumping acting on the background stratification. The sine and cosine Eqs (31) to (34) describe planetary-topographic Rossby waves with the same zonal wave number as the topography; β is a scaled planetary coefficient df/dy. There are terms arising from nonlinearities (advection) and diffusion. Note that Eq. (29) is the balance of vertically integrated momentum, and is congruent to Eqs (7) [ ] We can identify the barotropic and baroclinic contributions to the bottom form stress.
Considering the steady state of the above model we recover the most important physical mechanisms which we have outlined in this article.
The barotropic form stress drag: a barotropic state is obtained for λ = Q 0 = 0. Then all S-fields are identically zero and a barotropic solution as described by Eqs (21) The compensation of barotropic and baroclinic form stresses: for strong stratification (large λ) or vanishing nonlinearities (cornered bracket terms neglected, i.e. no advection) we learn from the zonal mean baroclinic balance Eq. (30) that if the form stress terms δT c and δhS c are individually increasing with topography height they must compensate since the remaining terms in the balance do not increase. The reason for the compensation of the barotropic and baroclinic form stresses, discussed in detail in Section 3.3, is thus found in the compensation of the vertical lifting or lowering of the background stratification (given by λ 2 ) by the pumping induced by the Ekman velocity and the barotropic and baroclinic vertical velocities of the flow (the first and second terms on the rhs of Eq. (30)), the latter two being generated by the flow passing across the topography (they are proportional to δ). It is not clear yet, however, which of the two form stresses drives and which brakes the zonal flow and under what circumstance they would increase with height of the barriers.
Breaking of 'Hidaka's dilemma' by OLBERS et al. 7 The scaling and coefficients are as follows: All transport variables are scaled by π 2 /(|f 0 |Y 2 ) to yield dimensionless T, S, T c ,…. Parameters are b = Y/(πL), ε = π 2 A h /(b|f 0 |Y 2 ),κ = π 2 K/(b|f 0 |Y 2 ), λ = πR/Y, β = 2Ydf/dy/|f 0 | where L is the zonal length and Y the width of the channel. R = √(g' / f 0 2 ) H 1 H 2 /H is the internal Rossby radius of a 2-layer fluid with densities ρ 1 , ρ 2 , mean layer thicknesses H 1 ,H 2 ,H = H 1 + H 2 and reduced gravity g' = g (1 -ρ 1 /ρ 2 ), furthermore h = H 1 /H is a thickness ratio. Coupling coefficients are R = 3π 2 b 2 /8, m = 64π 2 /3, n = 16/3. The scaled depth is 1 + (δ/π) sin(2πx/L), thus δ/π is the relative height of the topography. The forcing amplitudes are W 0 = (3/16)π 2 τ 0 /(bHYf 0 2 ), Q 0 = (3/32)π 3 B 0 /(b|f 0 | 3 Y 2 ) in terms of the zonal wind stress τ = τ 0 sin 2 (πy/Y) and the surface buoyancy flux B = B 0 cos(πy/Y) sin(πy/Y). Here,τ and B are dimensioned m 2 s -2 and m 2 s -3 , respectively.) Rossby radius is large (strong stratification). We then arrive at the physically intuitive statement that the transport in the surface layer, T + (1 -h)S, is decoupled from the topography: it is given by the wind stress and buoyancy flux acting in the above combination against lateral friction. The surface layer transport is then in a 'Hidaka'-type state (inversely proportional to the eddy viscosity ε and thus large for reasonably sized ε). The transport of the lower layer, T -hS, is governed by the magnitude of the bottom form stress and external heating Q 0 . This Hidaka dilemma must be resolved by weak stratification and large IFS from transient or/and standing eddies. Notice that the crucial term [T c S s -T s S c ] of the standing eddies is nullified for the flat bottom wave solution of the Eqs (31) to (34). It is, however, supported by linear topographic waves. The wave equations yield after some manipulations (38) The standing wave IFS is -in this approximation -negative if the topography is undulated and the bottom flow nonvanishing, it must transfer eastward momentum downward. It is worth noticing that it depends marginally on the viscosity ε but is entirely supported by the interfacial friction κ.
The form stress terms δT_c and δhS_c arise from the response of the barotropic and baroclinic wave system, described by the four Eqs (31) to (34), to the zonal flow crossing the topography (there is no direct forcing because the external heating function was assumed strictly zonal). A reasonable solution of the model should yield transports in the range β ≫ T, S ≫ βλ², meaning that the flow velocities are much less than the speeds of barotropic Rossby waves but supercritical with respect to the baroclinic waves, as in ACC conditions. Then the nonlinearities in Eqs (31) & (33) are small and these equations represent a linear barotropic planetary-topographic wave. The barotropic form stress δT_c can thus be explained by long linear Rossby waves generated by the deep current, T − hS in the above model, crossing the large ridges blocking the circumpolar path of the ACC. In contrast, the baroclinic form stress δhS_c might be governed by a nonlinear response: since T, S ≫ βλ², the advection terms in Eqs (32) and (34) cannot be neglected. We discuss the consequences further below.
Still, it is worth considering the linearized version of the above model.
A linear transport model
We thus neglect the standing eddy IFS in Eq. (30) and the advection terms in the wave equations. The form stress terms then take the linear forms of Eq. (39). The barotropic form stress is thus supported by lateral friction, the baroclinic by interfacial friction and stratification. However, both extract eastward momentum from the flow if the deep transport is eastward: the linear wave model thus does not imply the aforementioned form stress compensation effect. The total transport T and the shear transport S are readily evaluated as lengthy expressions in δ, ε, κ and λ. The response functions A, B, C and D are displayed in Fig. 22 as functions of topography height for typical parameters. All functions flatten out to a plateau at large topography heights but have different critical heights above which this happens. The critical height of the barotropic response is clearly δ ~ β (see also the Charney-DeVore model, Eq. (22)). The baroclinic response scale depends very much on the sizes of ε, κ and λ. The figure also elucidates the shares of the different transport sources, for typical W_0 and Q_0, in the total and shear transports. It becomes clear that only below quite moderate heights of the topography do we find the direct wind effect dominating. At larger heights, and for the present parameters, the Ekman pumping acting on the stratification is the most important driving agent of the transport. Only if the heating term Q_0/λ² becomes of the order of W_0 does the buoyancy forcing become effective for the transport.
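Because the linear model is linear in the two forcing amplitudes, the transports presumably organize as superpositions of the wind and buoyancy forcings. The following is our reading of the structure behind the response functions A, B, C, D, not a verbatim reproduction of the paper's Eq. (39) ff.:

```latex
% Plausible structure of the linear-model solution (our reconstruction):
T = A(\delta;\varepsilon,\kappa,\lambda,\beta)\,W_0 + B(\delta;\varepsilon,\kappa,\lambda,\beta)\,Q_0,
\qquad
S = C(\delta;\varepsilon,\kappa,\lambda,\beta)\,W_0 + D(\delta;\varepsilon,\kappa,\lambda,\beta)\,Q_0
```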
Finally, we consider a nonlinear extension of the above model. We want to improve on the baroclinic bottom form stress and thus still neglect the standing eddy IFS. Including now all terms in the baroclinic wave Eqs (32) & (34), the baroclinic bottom form stress becomes Eq. (42), which contains a heating contribution of the form hQ_0/λ². It is evident that supercritical conditions, T > βλ², can indeed lead to a negative baroclinic bottom form stress which then drives the current eastward (if the bottom flow T − hS is eastward and the last term on the rhs is small, actually requiring a large viscosity ε, as indicated by the numerical experiments in Fig. 21). We conclude that the supercriticality of the ACC with respect to baroclinic Rossby waves is essential for the system to achieve the observed balance of zonal momentum, in which wind and baroclinic bottom form stress drive and barotropic form stress decelerates the current.
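To see why supercriticality is plausible for the ACC, one can compare the dimensional long baroclinic Rossby wave speed βR² with observed current speeds; the numbers below are our illustrative assumptions, not values from the paper.

```latex
% Assumed: beta ~ 1.3e-11 m^-1 s^-1 near 55 deg S, internal Rossby radius R ~ 20 km
c_{bc} \simeq \beta_{dim} R^2
      \approx \left(1.3\times 10^{-11}\,\mathrm{m^{-1}\,s^{-1}}\right)
              \left(2\times 10^{4}\,\mathrm{m}\right)^2
      \approx 5\times 10^{-3}\,\mathrm{m\,s^{-1}}
```

Depth-mean ACC speeds of a few centimeters per second exceed this long-wave speed, which is the dimensional counterpart of the scaled condition T > βλ².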
The meridional overturning
Much of the recent perception of the ACC circulation is centered on the meridional overturning and the ventilation of water masses in the Southern Ocean. The classical view (Sverdrup et al. 1942) of water mass storage and spreading is sketched in Fig. 4 (Gordon 1999) and repeated in Fig. 12, where the role of eddies in the unblocked part of the water column is highlighted. We have pointed out at various places in this paper that eddies and turbulent mixing might accomplish a major task in shaping and balancing the overturning circulation, and it remains to set up a simple strawman model to demonstrate how this might work.
We follow the concept presented in Olbers & Visbeck (2004), which extends the work of Marshall & Radko (2003) to a predictive model of the overturning. We assume that all mixing and watermass formation processes take place in an upper layer of the ocean, basically a turbulent layer where Ekman transport and pumping are established by the wind and buoyancy is imprinted on the surface waters by the heat and freshwater flux from the overlying atmosphere, while the ocean interior is void of turbulence but eddies are present that transport and mix substances along isopycnals. We refer to this concept as the 'adiabatic eddy regime' and see it as an extreme scenario. The real ocean might have substantial mixing by turbulence in the interior as well (see Heywood et al. 2002, Naveira Garabato et al. 2004), and eddies might contribute by a diapycnal flux to watermass formation, too. However, the above simplified view allows for an analytical treatment. We choose a set-up in which the equations are averaged along a mean ACC path. We thus neglect the standing eddy component and use the TEM model derived in Section 4.1 with a downgradient parameterization of the meridional density flux by transient eddies, {v′ρ′} = −KLB_y. This casts the interior eddy streamfunction into φ_ed = Ks, where s = −B_y/B_z is the isopycnal slope. We neglect the eddy Reynolds stress in (26) and assume that the frictional stress τ acts as a body force which is nonzero only in the surface Ekman layer. Since wind stress and bottom form stress must balance, as discussed in Section 3.3, the amplitude of the form stress is that of the wind stress. Consequently, the Eulerian streamfunction is set by the wind stress and, abbreviating the Ekman transport by M = −{τ_o}/(Lf), we finally get φ = MT(z) and φ_res = MT(z) + Ks, where T(z) is a structure function as sketched in Fig. 24. With no turbulence in the interior we have J ≡ 0 and the density budget of (26) becomes J(φ_res, B) = 0, saying that residual streamlines and isopycnals coincide. This can be written as a differential equation for the slope, Eq. (43); the isopycnals are the characteristics of this equation. Initial data for s are required on some non-isopycnal curve; in our case this will be the depth level z = −a below which turbulence stops acting. Because s is infinite in the mixed layer, where B = B_d(y), we assume between the mixed layer base z = −d and the depth z = −a an intermediate layer with finite slopes. As in the mixed layer, we implement a prescribed structure of the density field in this 'slope layer'. The slope at z = −a is then s_a = (a − d)(∂B_a/∂y)/(B_a − B_d).
Fig. 23. The forcing data obtained by an ACC-path-following average from the NCEP analysis (wind stress: full, units 10⁻⁴ m²s⁻²; density flux: dashed, units 10⁻⁶ kg m⁻² s⁻¹). From Olbers & Visbeck (2004).
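As a minimal numerical sketch of the characteristic construction described above: along an isopycnal, φ_res = MT(z) + Ks is constant, so the isopycnal depth obeys dz/dy = s = (φ_res − MT(z))/K. All profiles and names below are our illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative profiles (assumptions, not values from the paper)
def T_struct(z, d=150.0, H=4000.0):
    """Structure function of the Eulerian streamfunction: 1 in the Ekman
    layer, decaying toward the bottom (a rough stand-in for Fig. 24)."""
    return 1.0 if z > -d else max(0.0, 1.0 + (z + d) / (H - d))

def K(y, K0=500.0, Ly=2.0e6):
    """Eddy diffusivity [m^2/s]: vertically constant, linear in y,
    increasing by a factor of ~5 toward the north (as in the text)."""
    return K0 * (1.0 + 4.0 * y / Ly)

M = 2.0  # Ekman transport [m^2/s]; sign conventions are left loose here

def rhs(y, z, phi_res):
    # along an isopycnal, phi_res = M*T(z) + K*s is constant => dz/dy = s
    return [(phi_res - M * T_struct(z[0])) / K(y)]

# trace one isopycnal northward from its slope-layer base value z(y=0) = -a
sol = solve_ivp(rhs, (0.0, 2.0e6), [-400.0], args=(1.0,), max_step=2.0e4)
# sol.y[0] holds the isopycnal depth z(y): a characteristic of Eq. (43)
```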
It remains thus to determine the upper layer densities B_d(y) and B_a(y). In this depth range the complete nonadiabatic density balance Eq. (23) must be applied. We insert the density structure (mixed layer and slope layer) and deduce a coupled advective-diffusive set of balance equations. Density is forced by the surface density flux arising from the exchange of heat and freshwater with the atmosphere (we use the forcing data shown in Fig. 23). There is meridional advection by the Eulerian Ekman currents and by eddies; there is vertical pumping by these agents (eddy pumping would appear if the advection terms are split into complete divergences and vertical advection), with mixing by turbulence to sustain the vertically mixed state B_d(y), and entrainment of density at the mixed layer base, parameterized by a mixing coefficient α_d. Corresponding terms arise in the slope layer balance. The density that is entrained into the slope layer from below is the density at depth ranges with inward flow at the northern boundary of the model domain, e.g. NADW at 40°S. This water proceeds up the isopycnals to the slope layer base without changing its density. The coupling to the interior ocean occurs via the predicted slopes at the base of the slope layer.
The performance of the model is exemplified in Figs 25 & 26, using reasonable parameter values (see figure caption). It should, however, be borne in mind that the solution structure might change substantially if another choice is made, in particular in the interior, where the slopes are inversely proportional to the value of the eddy diffusivity K and changes at some level influence, via the characteristics, the entire deeper structure. For the simulation shown in Fig. 26 we have used the most simple form, oriented at the diffusivity estimates discussed in Olbers & Visbeck (2004): K(y, z) is vertically constant and a linear function of y with a significant increase (by a factor of 5) towards the north. This particular solution yields an up-down-up-down pattern of pumping (by eddies and Ekman) at the slope layer base (see the solid curve in the lower right panel of Fig. 25), which may be associated with upwelling NADW, downwelling AAIW and upwelling Subantarctic Mode Water (the latitudes and densities are roughly consistent with this interpretation). Propagation of the upper layer solution via the characteristic Eq. (43) into the interior yields the associated isopycnals (and residual streamfunction), which also coincide well with the observed interior density structure (see Fig. 24 and the left panel of Fig. 26). We would like to point out that the eddy field has a dominant share in this simulation: without eddies the streamfunction and densities would mirror the Deacon cell depicted in Fig. 24. This is also evident from the streamfunctions and associated pumping velocities of the solution shown in the lower panels of Fig. 25.
Fig. 27. The east-west section displays the isopycnal and sea surface tilts in relation to submarine ridges, which are necessary to support the bottom form stress signatures discussed in the text. The curly arrows at the surface indicate the buoyancy flux; the arrows attached to the isopycnals represent turbulent mixing. Sinking of Antarctic Intermediate Water (AAIW) is not shown (see Fig. 24). Redrawn using a figure from Speer et al. (2000).
Conclusions
There are five threads running through this review of the ACC system: submarine topography, standing and transient eddies, long barotropic and baroclinic Rossby waves, turbulent mixing, and the surface fluxes of momentum and buoyancy. They are woven into the physics governing this extraordinary current; we attempt to contribute to an answer to the most interesting and most urgent questions about the ACC: what is the balance of zonal momentum; what mechanisms and forcing functions determine the transport; how do watermasses and the substances they carry penetrate the strong and deep-reaching zonal flow? We base our discussion on the research on these topics that has accumulated in the last decade and partly answer these questions. Our tools are observations, theory and models coming from a variety of instrumental techniques, field expeditions and modelling concepts. Fig. 27 gives a sketch of the physics of the ACC system showing its most important ingredients. The flow achieves a balance of its zonal momentum in which the input of eastward momentum from the wind stress, acting all the way around Antarctica, is transferred by standing and transient eddies through the water column to the bottom, where the bottom pressure field adjusts such that bottom form stress acts as a sink. Clearly, if this adjustment is not present, the imbalance accelerates the current, inducing changes of the pressure field until a balance has been reached. Separating this form stress into the part arising from the surface pressure and the part due to the internal mass stratification, it is seen that these individual components overwhelm the wind stress by an order of magnitude; the barotropic form stress retards the eastward current and the baroclinic one accelerates it (see the east-west interface of Fig. 27). Both, however, must compensate to a high degree, a constraint that arises from the balance of the zonal mean isopycnal thickness structure, in which vertical pumping by the Ekman, barotropic and baroclinic velocities of the current acting on the mean stratification dominates and must cancel. The large-scale vertical flow comes about by the zonal flow passing across the large-scale topographic barriers, mainly the midocean ridges along the path of the ACC in the Southern Ocean. We have found evidence that the driving of the eastward current by the baroclinic bottom form stress needs supercriticality of the flow with respect to the associated long planetary Rossby waves: they must be advected eastward by the current to become locked with the required phase shift to the zonal topographic undulations.
The downward transport of zonal momentum is established by the mechanism of interfacial form stress, by which pressure forces act across the isopycnal interfaces that are tilted by the action of transient and standing eddies. The form stress is associated with a meridional eddy density (or heat) flux, and a downward flux of eastward momentum supports a poleward heat flux, in agreement with observations and eddy-resolving models. The vertical divergence of interfacial form stress drives the meridional overturning circulation. In isopycnal ranges which are zonally unblocked by submarine topography, the divergence of the interfacial form stress is the dominant driving force of the meridional flow; it is eddy-driven, as indicated by the wavy arrows at intermediate depths on the front side of Fig. 27. Below, in the blocked range of isopycnals, we have geostrophic meridional flow in the valleys between the submarine ridges, supported by the pressure gradients associated with the bottom form stress. Above the eddy-driven regime we find the northward Ekman flow, driven by the eastward zonal wind. Clearly, the balance of momentum and the forces driving the meridional overturning circulation correspond to each other (in a mathematical frame they are described by the same equations).
The isopycnals in the Southern Ocean connect the deep ocean north of the ACC to the surface areas to the south, with the ACC attached to the stronger, depth-correlated tilts. We have turned existing concepts of the processes shaping the isopycnal stack in the Southern Ocean into a prognostic theory; for these processes we again refer to Fig. 27. The Eulerian mean flow and the eddies combine to transport density (heat and substances) to and from the upper ocean layer, where mixing by small-scale turbulence and exchange of heat and freshwater with the atmosphere must occur. Two assumptions imply that the transport in the interior (below the mixed layer) by the mean flow and eddies is entirely along isopycnals: eddies do not carry properties across isopycnals, only along them; and diapycnal mixing by small-scale turbulence is absent below the mixed layer. Then the streamlines of the transport by mean flow and eddies, representing the residual circulation, coincide with isopycnals. The concept allows the complicated mathematics of an advection-diffusion-mixing regime to be broken into manageable parts (mixed-layer physics with Ekman and eddy advection, and an adiabatic interior) which may be solved by simpler means. We are able to predict the density field, i.e. the decrease of surface density with increasing latitude and the shape of the downward sloping isopycnals, from the wind field and the buoyancy flux through the ocean surface. The theory applies to an ACC-path-following average and is thus rather qualitative, but it demonstrates the overwhelming importance of the transient eddy field in shaping the density field in the Southern Ocean.
The last concern of our review is the issue of the transport of the ACC. Over many decades the current was considered basically wind-driven, and its transport was attributed to the direct action of the wind over the Southern Ocean as the most prominent driving agent, though early experiments with global OGCMs clearly showed the importance of baroclinicity for the ACC transport. A more detailed transport theory is recapitulated below in two steps.
If a mean stratification is considered as a given horizontally uniform background (a mean Brunt-Vaisala frequency profile or an isopycnal layer stack) the effect of the wind stress is not limited to a direct driving of currents by friction but there is in addition the Ekman pumping acting to deform the stratification. Together with a prescribed input of buoyancy (say, cooling in the south and heating in the north) this Ekman pumping sets up a baroclinic pressure force which gives rise to a baroclinic component of the current. Feedback by bottom form stress (the Rossby wave connection mentioned above) directly influences the zonal transport. In summary, the ACC transport has a direct contribution from the wind stress, an 'indirect' contribution from the Ekman pumping on the mean stratification, and a contribution from the prescribed surface buoyancy flux. The latter two contributions only appear if the topography is undulated (if bottom form stress may act). Our simple linear transport model elucidates these mechanisms and clearly shows that for sufficiently high topography amplitudes the indirect wind forcing and the external heating may be dominant over the direct wind effect.
In a second stage we realize that the stratification is not given but depends on many processes, among them features of the local wind stress and surface buoyancy flux not considered so far. Non-zonal patterns in these fluxes (e.g. more cooling in the Atlantic sector of the Southern Ocean than elsewhere) may generate zonal pressure forces in the bottom form stress and influence the transport. There is also the possibility of remote control via the NADW/conveyor-belt connection. There are other second-order effects, such as the influence of the winds on turbulence and mixing, and regional differences in the transient eddy energy. We did not attempt to quantify these effects on the ACC transport.
The concepts and models discussed in this review would greatly benefit from extended knowledge in some specific areas. From the experimental and field work community we need information about the level and regional distribution of diapycnal mixing in the area south of the ACC, in particular where in the water column it predominantly occurs. Furthermore, we would profit from better knowledge about the parameterization of isopycnal eddy fluxes, which certainly needs a combination of theoretical work and eddy-resolving modelling efforts in idealized and realistic configurations. Finally, we must admit that many of the considerations in this review are based on relatively simple models and model diagnostics. We would certainly profit from improved diagnostic utilization of realistic eddy-resolving models toward an understanding of the mechanisms outlined in this article, e.g. the establishment of form stress in the interior and at the bottom by waves and eddies.
"Environmental Science",
"Physics"
] |
Real-time dual-modal photoacoustic and fluorescence small animal imaging
By combining optical absorption contrast and acoustic resolution, photoacoustic imaging (PAI) has broken the depth barrier of high-resolution optical imaging. Meanwhile, fluorescence imaging (FLI), owing to its advantages of high sensitivity and high specificity together with abundant fluorescent agents and proteins, has long played a key role in live animal studies. Based on their different optical contrast mechanisms, PAI and FLI can provide important complementary information to each other. In this work, we designed a photoacoustic-fluorescence (PA-FL) imaging system that provides real-time dual-modality imaging, in which a half-ring ultrasonic array is employed for high-quality PA tomography and a specially designed optical window allows simultaneous whole-body fluorescence imaging. The performance of this dual-modality system was demonstrated in live animal studies, including real-time monitoring of the perfusion and metabolic processes of fluorescent dyes. Our study indicates that the PA-FL imaging system has unique potential for live small animal research.
Introduction
Small animal models, especially rodents, play an important role in life science and pre-clinical research. Many noninvasive imaging technologies have been developed for small animal research. Among the various imaging methods, live animal fluorescence imaging (FLI) is the most widely employed owing to its superior advantages, including noninvasive and non-ionizing mechanisms, high sensitivity and specificity, abundant molecular tracers, and real-time imaging over a large field of view (FOV) [1-4]. However, strong tissue light scattering substantially limits the image resolution of FLI in deep tissues. Over the past decade, photoacoustic imaging (PAI), which uniquely combines optical absorption contrast and ultrasonic detection [5], has become a powerful method for live animal imaging with high resolution at unprecedented depth [6-17].
Besides its strength in structural and functional imaging of blood vessels, many more PA contrast agents and molecular tracers are being intensively explored. Notably, most of these agents are poor fluorophores [18]. Therefore, with their different imaging contrasts, imaging depths and spatial resolutions, PAI and FLI can provide important complementary information to each other, and it is of great value to explore the integration of PAI with FLI for live animal studies. Several studies have reported efforts to combine PAI results with FLI results for small animal research [19-22]. These integrated systems either performed PAI and FLI sequentially or took a long time to finish dual-modal imaging, which is suitable for monitoring relatively slow changes. However, real-time synchronization of PAI and FLI is also important: not only can body motion due to heartbeat and breathing affect the spatial registration of the two modalities, but there are also transient physiological processes, such as intravenous drug delivery or perfusion in the kidney [23,24], all of which demand real-time synchronous imaging. Here, we report a real-time PA-FL dual-modality imaging method, which is equipped with an ultrasonic array to provide real-time two-dimensional (2D) PAI, while simultaneous whole-body FLI is acquired through an optical window in the PAI system. We successfully performed in vivo real-time imaging of mice, presenting the fluorescent dye perfusion and metabolic process not only over the whole body by FLI but also at unprecedented depth and resolution by PAI. Our results demonstrate that the system has great potential for small animal studies, including drug delivery [25], dynamic metabolic molecular tracing, and monitoring other fast physiological or pathological processes.
Dual-modal imaging system
The PA-FL system consists of two subsystems: the FLI subsystem and the PAI subsystem. As shown in Fig. 1, the PAI subsystem has a customized 256-element half-ring ultrasonic transducer array (ULSO TECH Inc., China) with a central frequency of 5.0 MHz and a one-way bandwidth of 80%. The half-ring has a diameter of 100 mm, and each array element is cylindrically focused in the elevational direction with a focal length of 40 mm. To maximize the signal-to-noise ratio (SNR), a self-developed 256-channel preamplifier (40 dB gain) is directly connected to the ultrasonic transducer array. The amplified PA signals are then received in parallel by a 256-channel data acquisition (DAQ) instrument (Marsonics DAQ, Tsingpai Tech-Co, China; 6 dB gain) at a 40 MHz sampling rate. The PA signal is excited by an optical parametric oscillator (OPO) pulsed laser (Innolas SpitLight 600, Germany) with a repetition rate of 10 Hz. The laser is coupled into a 1-to-10 fiber bundle, and each branch end has a rectangular shape of 1 × 7 mm. The branch ends are evenly distributed around the imaged target, forming a nearly uniform circular illumination pattern, as shown in Fig. 1. A water tank is used for coupling the ultrasound (US) signal to the half-ring detector, and the animal body is immersed in water during imaging with its head out of the water, attached to a gas mask.
In order to simultaneously perform whole-body FLI, the water tank has an optically transparent window opposite the US array, allowing both FL excitation and emission acquisition, as shown in Fig. 1. A 1-to-2 fiber bundle is used to excite the FL signal by illuminating through the optical window, and the emitted FL signal is reflected by a 45-degree tilted mirror into a FL camera (Hamamatsu Flash 4.0, Japan). A fluorescence filter is mounted in front of the camera's lens. In the following experiments, a 785 nm continuous-wave laser (CNI MDL-III-785, China) was used to excite the FL signal.
To synchronize PAI and FLI, when the OPO laser emits a laser pulse, a photodetector (Thorlabs DET100A2) detects the pulse and generates a synchronizing trigger signal sent to both the DAQ and the FL camera to acquire data. To keep stray photons from the strong pulsed laser out of the camera, we set a 10 ms delay before the camera starts its exposure; the camera exposure time is set to 70 ms. During the imaging process, the small animal is fixed vertically on an animal holder in the water tank. The water temperature is maintained at 36 °C, and an electrical translation stage can move the animal body up and down to obtain photoacoustic images at an arbitrary body slice. During in vivo experiments, an air mask covers the animal's nose, supplying 1.5% vaporized isoflurane gas.
System characteristics
We used a human hair with a diameter of ~90 µm to test the PAI system resolution. The hair was placed vertically at the center of the half-ring ultrasound array, and the measured resolution at the center point is ~150 µm, as shown in Fig. 2(a-d). The PAI image reconstruction is accelerated by a customized GPU algorithm, achieving a reconstruction time of less than 30 ms (GPU: RTX 3060 Ti); the PA imaging can therefore be displayed in real time. For the fluorescence imaging (FLI) setup, we employed a scientific CMOS camera (Hamamatsu Flash 4.0, Japan) with a sensor resolution of 2048 × 2048 pixels. The camera, featuring a lens with a focal length of 55 cm, provides a field of view (FOV) of approximately 11 cm × 11 cm, which is sufficient to cover the whole body of a small animal. For resolution assessment, as depicted in Fig. 2(e), we used FLI to image a resolution chart (Thorlabs R3L3S1P) placed at the intended animal position. The finest distinguishable pair of horizontal and vertical lines, enclosed in the yellow box in Fig. 2(e), corresponds to the second element of group three (8.98 line pairs per millimeter, lp/mm). This indicates the maximal spatial resolution attainable by our fluorescence imaging system.
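As a quick cross-check of the quoted chart reading, the standard USAF-1951 formula relates group and element to spatial frequency. A small sketch (assuming the chart follows the USAF-1951 convention, which is consistent with the quoted 8.98 lp/mm; the function name is ours):

```python
def usaf_lp_per_mm(group: int, element: int) -> float:
    """USAF-1951 target spatial frequency in line pairs per millimeter."""
    return 2.0 ** (group + (element - 1) / 6.0)

f = usaf_lp_per_mm(3, 2)            # group 3, element 2 -> ~8.98 lp/mm
bar_width_um = 1000.0 / (2.0 * f)   # width of one bar of a line pair -> ~55.7 um
print(f"{f:.2f} lp/mm, resolvable bar width ~{bar_width_um:.1f} um")
```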
In vivo photoacoustic imaging
To demonstrate the PAI performance, we used the 1064 nm laser to excite the PA signal. A ~20 g nude Balb/c mouse was used, which had a trunk diameter of approximately 16 mm. The total energy delivered onto the body surface is 36 mJ, and the calculated fluence is 12 mJ/cm², far less than the ANSI safety limit (100 mJ/cm² at 1064 nm) [5]. Fig. 3 shows two cross-sectional (B-scan) PA imaging results, which clearly show the structure of multiple organs and tissues in this mouse, including the abdominal aorta, intestinal tract, inferior vena cava, left and right kidneys, left and right lobes of the liver, portal vein, spinal cord, spleen and stomach. Besides the original reconstructed results, we also show results after Frangi vascular filtering to enhance the display of vascular networks. From the imaging results, it can be seen that our half-ring PAI system has excellent imaging capability for small animal bodies.
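The quoted exposure numbers can be sanity-checked with simple arithmetic. The MPE formula below is the commonly cited ANSI Z136.1 skin limit for nanosecond pulses; it is our addition, not from the paper:

```python
E_total_mJ = 36.0            # pulse energy on the body surface (from the text)
fluence_mJ_cm2 = 12.0        # quoted fluence
area_cm2 = E_total_mJ / fluence_mJ_cm2   # implied illuminated area: 3.0 cm^2

def ansi_skin_mpe(wavelength_nm: float) -> float:
    """Approximate ANSI Z136.1 skin MPE (mJ/cm^2) for 1-100 ns pulses, 400-1400 nm."""
    if wavelength_nm < 700:
        ca = 1.0
    elif wavelength_nm <= 1050:
        ca = 10.0 ** (0.002 * (wavelength_nm - 700.0))
    else:                      # 1050-1400 nm
        ca = 5.0
    return 20.0 * ca

print(area_cm2)               # 3.0 cm^2
print(ansi_skin_mpe(1064.0))  # ~100 mJ/cm^2, matching the limit cited in the text
```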
Dual-modal in vivo real-time imaging of fast dye perfusion process
For this study, we injected indocyanine green (ICG) dye into a mouse via tail vein injection. ICG does not undergo renal metabolism; it only passes quickly through the kidney vasculature via the blood circulation. During the entire ICG perfusion process, the dual-modal system performs real-time synchronous FL and PA imaging, as shown in Fig. 4. We used a red pseudo-color to indicate the dynamic ICG signal, overlaid on the grayscale PA structural image. The dynamic ICG signal is obtained by subtracting the baseline photoacoustic image (acquired before ICG injection) from each registered photoacoustic image acquired after ICG injection.
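A minimal sketch of this differential scheme (our naming; it assumes the frame stack has already been registered to the baseline, as the Demons step described in the data-processing section provides, and applies the 1 Hz temporal low-pass described there):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def dynamic_icg(frames: np.ndarray, n_baseline: int, frame_rate_hz: float = 10.0):
    """frames: (time, ny, nx) stack of registered PA images."""
    baseline = frames[:n_baseline].mean(axis=0)      # pre-injection average
    diff = frames - baseline                         # ICG-induced PA change
    # 1 Hz temporal low-pass to suppress heartbeat-induced fluctuations
    b, a = butter(2, 1.0 / (frame_rate_hz / 2.0), btype="low")
    return filtfilt(b, a, diff, axis=0)
```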
Fig. 4 shows the fluorescence and photoacoustic images collected at different times within 10 s after the start of tail vein injection. The photoacoustic (PA) image is displayed on the left, with the corresponding fluorescence (FL) image shown on the right. We adjusted the vertical position so that the PA imaging shows the cross section at the kidney location, as indicated by the white line overlaid on the FLI results, while the FLI shows whole-body imaging. As shown in Fig. 4(c-d), the yellow boxes in the right fluorescence image correspond to the positions of the left and right kidneys, respectively. According to the PAI results, the ICG dye flowed past the inferior vena cava toward the heart at about 2.7 s, flowed into the kidney from the thoracic aorta at about 6.3 s, and then perfused the entire kidney vascular network at about 8.2 s. As evidenced by the data depicted in Fig. 4(e), there is a simultaneous enhancement of the fluorescence signal intensity and the photoacoustic signal intensity of ICG at the location of the corresponding kidney. The signal-to-noise ratio (SNR) of the ICG photoacoustic signal is measured at 27 dB, and the SNR of the fluorescence signal is 64 dB. These results indicate that the PA image provides superior spatial resolution within the cross-sectional area of the kidney, while the FL image offers concurrent whole-body imaging with heightened sensitivity. It is worth noting that at 2.7 s, although a strong PA signal indicates the arrival of ICG at the inferior vena cava, the FL signal is very weak because strong tissue light scattering substantially attenuates both the FL excitation and emission photons. More results are provided in the online animation movie (Media 1).
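For reference, the quoted SNR values can be converted from decibels to linear ratios, assuming the common amplitude convention SNR_dB = 20 log10(signal/noise), which the paper does not state explicitly:

```python
pa_snr = 10 ** (27 / 20)   # ~22x amplitude ratio for the ICG PA signal
fl_snr = 10 ** (64 / 20)   # ~1585x for the fluorescence signal
print(f"PA ~{pa_snr:.0f}x, FL ~{fl_snr:.0f}x")
```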
Supplementary material related to this article can be found online at doi:10.1016/j.pacs.2024.100593.
Dual-modal in vivo imaging of dye metabolism
In this study, we continued by monitoring the metabolism of ICG as a demonstration. In a healthy mouse, injected ICG passes through the liver to the gallbladder and then enters the intestines to be expelled from the body. We injected a certain concentration of ICG solution through the tail vein and monitored the signal changes in the liver and intestines within one hour after injection.
Since we needed to monitor several locations, a motorized translation stage was used to change the mouse's vertical position, and PAI scanned the mouse body at a step size of 0.1 mm over 30 mm, taking about 60 s to finish one cycle. Fig. 5 shows two typical cross sections, for the liver and the intestine, with simultaneous whole-body FLI performed side by side. After injection, much of the ICG is absorbed by the liver within ~10 min, which substantially blocks the PA excitation light (790 nm) and leads to an obvious decrease in image quality for deep organs. Fig. 5(c) shows the result 12.5 mm below the liver, at the intestine; it clearly shows that much of the ICG reaches the intestine within ~30 min. The simultaneous FLI results show the whole-body dynamics of the FL signals, which are overall consistent with the PAI results. Our results are also consistent with the known metabolic pathway of ICG. More results are provided in the online animation movies (Media 2-3). These experiments prove that the system is able to observe the dynamic whole-body metabolic process of fluorescent dyes in small living animals.
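The quoted cycle time is consistent with simple pulse-counting arithmetic; the assumption of roughly two pulse periods per position is our reading, not stated in the paper:

```python
scan_range_mm, step_mm, prf_hz = 30.0, 0.1, 10.0
n_positions = int(scan_range_mm / step_mm) + 1   # 301 slice positions
t_min_s = n_positions / prf_hz                   # ~30 s at one pulse per slice
# The quoted ~60 s per cycle thus corresponds to about two pulse periods per
# position (e.g. stage motion or averaging) -- an assumption on our part.
print(n_positions, t_min_s)
```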
Supplementary material related to this article can be found online at doi:10.1016/j.pacs.2024.100593.
Animal preparation and experiment
All experimental procedures were carried out in conformity with the laboratory animal protocols approved by the Animal Research Committee of Peking University. Nude mice aged 8 to 12 weeks (Balb/c nude, Charles River; ~20 g body weight, male) were used for the in vivo imaging and ICG experiments. The food supply was stopped 12 h before the experiment while the water supply was maintained, in order to empty the metabolites in the intestine. Before the experiment, depilatory cream was used to remove the sparse hair of the mice, reducing the photoacoustic signal from the epidermis. Throughout the experiment, the mouse was maintained under anesthesia with 1.5% vaporized isoflurane. The animal body was immersed in a water tank filled with deionized water maintained at around 36 °C. The motor moves the small animal holder along the z-axis at a constant speed, enabling photoacoustic imaging at any position. In the body structural imaging experiment of Fig. 3, the laser wavelength was 1064 nm, the repetition rate was 10 Hz, and the illumination fluence was 12 mJ/cm², far below the ANSI laser safety limit. In the ICG experiments, the laser wavelength was 790 nm, the repetition rate was 10 Hz, and the illumination fluence was 8 mJ/cm², also far below the ANSI laser safety limit. The ICG was injected via a self-made tail vein indwelling needle, which allows the injection of dyes while the animal body is immersed. The concentrations of the ICG solutions used in the perfusion and metabolic pathway experiments were 5 mg/ml (125 µg total) and 0.3 mg/ml (30 µg total), respectively.
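The stated concentrations and total doses directly imply the injected volumes (simple arithmetic; variable names are ours):

```python
# 1 mg/ml = 1 ug/ul, so dose [ug] / concentration [mg/ml] gives volume in ul
perfusion_volume_ul = 125.0 / 5.0    # 125 ug at 5 mg/ml   -> 25 ul
metabolism_volume_ul = 30.0 / 0.3    # 30 ug at 0.3 mg/ml  -> 100 ul
print(perfusion_volume_ul, metabolism_volume_ul)
```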
Data processing and image reconstruction
For in vivo animal imaging, we used the dual-speed-of-sound back-projection algorithm implemented in MATLAB to reconstruct photoacoustic images [14,26]. For the in vivo photoacoustic images shown in Fig. 3(a) and (c), we processed the images to improve contrast through the following steps: (1) we applied a high-pass filter with a passband frequency of 0.8 MHz and a stopband frequency of 0.3 MHz to enhance the vascular image; (2) we used contrast-limited adaptive histogram equalization (CLAHE) to enhance contrast [27]. As shown in Fig. 3(b) and (d), we applied a set of Hessian-based Frangi vascular filters to enhance the display of vascular networks [28]. The data processing steps were as follows: (1) we applied a high-pass filter with a passband frequency of 0.8 MHz and a stopband frequency of 0.3 MHz to suppress the low-frequency content; (2) we set negative PA image values to zero, restricting all PA image values to the non-negative range; (3) we applied the Frangi filter to the resulting non-negative PA images; (4) we used CLAHE after the Frangi vascular filters. In the dye metabolism experiment, only a high-pass filter with a passband frequency of 0.1 MHz and a stopband frequency of 0.05 MHz was used to suppress the DC bias and low-frequency noise from the amplifier circuit. In the dye perfusion experiment, the same filter was used to suppress low-frequency noise. We acquired photoacoustic imaging data both pre-injection and post-injection. The datasets were then aligned using the Demons registration algorithm [29], resulting in the grayscale photoacoustic images shown in Fig. 4. To obtain the dynamic indocyanine green (ICG) photoacoustic signals, we subtracted the baseline photoacoustic image (acquired before ICG injection) from each registered image obtained thereafter. The ICG photoacoustic images were subsequently processed with a temporal low-pass filter with a cutoff frequency of 1 Hz to mitigate heartbeat-induced amplitude fluctuations in the photoacoustic signal. This filtering step is based on the understanding that the kinetics of ICG perfusion within the tissue are considerably slower than the mouse's cardiac cycle. To clearly demonstrate the renal perfusion process, only the dynamic ICG photoacoustic signals of the kidney and its surrounding major blood vessels are displayed in red pseudo-color in Fig. 4, while PA signal changes in other organs are ignored [23]. As shown in Fig. 4(e), the average photoacoustic signal of ICG within the region outlined by the blue box is plotted over time, as is the average fluorescence signal within the region designated by the yellow box. The size of the blue box is 2.0 mm × 1.0 mm and the size of the yellow box is 2.0 mm × 0.5 mm; the two regions correspond in spatial position.
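For readers unfamiliar with the reconstruction step, a minimal single-sound-speed delay-and-sum back-projection sketch is given below. The paper's actual algorithm uses two sound speeds and GPU acceleration; this simplified version and all names in it are ours:

```python
import numpy as np

def backproject(signals, fs, elem_xy, grid_x, grid_y, c=1500.0):
    """Naive delay-and-sum PA back-projection onto a 2D grid.

    signals : (n_elem, n_samples) band-passed PA traces
    fs      : sampling rate [Hz] (40 MHz in this system)
    elem_xy : (n_elem, 2) transducer element positions [m]
    c       : assumed uniform speed of sound [m/s]
    """
    X, Y = np.meshgrid(grid_x, grid_y)
    img = np.zeros_like(X)
    n_samples = signals.shape[1]
    for k in range(elem_xy.shape[0]):
        # time of flight from each pixel to element k, converted to samples
        dist = np.hypot(X - elem_xy[k, 0], Y - elem_xy[k, 1])
        idx = np.clip(np.rint(dist / c * fs).astype(int), 0, n_samples - 1)
        img += signals[k, idx]
    return img
```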
Dual-mode synchronization scheme
We used a 10 Hz OPO laser with a repetition period of 100 ms to excite the PA signal, while a 785 nm CW laser was used to excite the FL signal. When a photodetector detects a pulse from the OPO laser, it emits a pulse signal to trigger both the fluorescence camera and the data acquisition system (DAQ). Once triggered, the PA DAQ system captures 2048 data points at 40 MHz, completing in 51.2 µs. The fluorescence camera, however, delays its exposure by 10 ms to avoid the fluorescence imaging being influenced by the intense OPO laser light. The camera's exposure time is set to 70 ms, as shown in Fig. 6.
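The synchronization budget can be checked in a few lines (the timing values come from the text; the check itself is our addition):

```python
period_ms = 100.0           # 10 Hz OPO repetition period
daq_ms = 2048 / 40e6 * 1e3  # 0.0512 ms of PA acquisition per trigger
delay_ms, exposure_ms = 10.0, 70.0
budget_ms = delay_ms + exposure_ms                   # camera path, ~80 ms
assert budget_ms < period_ms and daq_ms < period_ms  # both fit in one period
print(budget_ms, daq_ms)
```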
Discussion and Conclusion
In this work, we developed a real-time PA-FL dual-modality imaging system, employing a half-ring ultrasonic array that enables real-time 2D PAI in conjunction with whole-body FLI through an optical window. This imaging system achieved in vivo real-time imaging of the perfusion and metabolic processes of fluorescent dyes in mice. The system integrates the advantages of FLI and PAI: FLI provides a real-time large FOV with whole-body coverage and high sensitivity, while PAI offers high resolution in deep tissue and provides tissue structural information. The real-time synchronous imaging ability of the dual-modality fluorescence and photoacoustic system unlocks novel avenues for small animal research and preclinical studies, such as monitoring whole-body dynamic drug delivery, tracing cancer cells labeled with dual-modal molecular tracers, and studying neurovascular coupling in both the central and peripheral nervous systems.
In future work, multi-wavelength PAI will be implemented to provide more functional and structural information, including oxygen saturation and tissue composition. Besides system upgrades, we will also optimize reconstruction algorithms to alleviate artifacts caused by the limited view of the half-ring configuration, including advanced iterative reconstruction methods and artificial-intelligence-aided methods. In conclusion, our work provides a unique real-time dual-modality system that integrates two powerful small animal imaging methods, photoacoustic imaging and fluorescence imaging, providing a platform for various dynamic molecular and functional imaging studies.
Declaration of Competing Interest
We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work. There is no professional or other personal interest of any nature or kind in any product, service and/or company that could be construed as influencing the position presented in, or the review of, the manuscript entitled "Real-time dual-modal photoacoustic and fluorescence small animal imaging system".
Fig. 1. Dual-modal imaging system. (a) 3D schematic diagram of the system; (b) illustration of both the light and sound pathways.
Fig. 2. PAI-FLI system resolution testing results. (a) Photoacoustic image of a human hair; (b) schematic diagram of the X and Y axes; (c) X-direction PA resolution; (d) Y-direction PA resolution; (e) camera resolution chart under the fluorescence imaging system.
Fig. 4. Fluorescence and photoacoustic images at different times in the renal perfusion experiments. (a-d) After injection of ICG into the tail vein, photoacoustic and fluorescence images at t = 0.0 s, 2.7 s, 6.3 s and 8.8 s; the left side is the photoacoustic image, the right side the fluorescence image. (e) The average photoacoustic signal of indocyanine green (ICG) within the region outlined by the blue box over time, together with the average fluorescence signal within the region designated by the yellow box.
Fig. 5. Photoacoustic and fluorescence images at different times in the ICG metabolism experiments. (a) Photoacoustic and fluorescence image of the liver before ICG injection; (b) PA and FL image of the liver 10 min after ICG injection; (c) photoacoustic and fluorescence image of the intestines before ICG injection; (d) PA and FL image of the intestines 30 min after ICG injection.